Revealed: How Banking and Finance GRC Leaders Struggle to Address Regulators’ Demands for Cyber Evidence with Confidence – Cyber Defense Magazine
By Charaka Goonatilake, CTO at Panaseer
It’s one thing to keep data secure and assets protected, but another thing entirely to have the evidence at hand to prove the coverage and effectiveness of your security controls to third parties.
And when those third parties include financial regulators with the power of life and death over your organization’s trading license, answering their questions accurately, confidently, and in a timely manner is everything.
Keeping on top of regulators’ demands for cyber-related data is perhaps the most business-critical function of a bank’s or financial services company’s GRC (governance, risk and compliance) department. However, according to intensive research conducted for Panaseer among a cohort of 200 well-placed GRC leaders at 5,000+ employee finance institutions on both sides of the Atlantic, all is not well with how they and their teams address these issues. Within the research findings, described in more detail below, a picture emerges of GRC teams grappling with growing volumes and complexities of data requests, and with signs that the labor-intensive methods they have traditionally employed for dealing with regulator requests are becoming serious causes for concern.
Searching questions are not simple to answer
Behind each regulatory request is a simple guiding principle on the part of the regulator: ascertaining the organization’s true security posture in the context of specific legislation. The old adage “the simplest things are the most complicated” rings very true here, particularly because IT and business infrastructures at these organizations are so vast and interwoven. Moreover, the complex and often urgent nature of the inquiries means there is seldom an efficient or repeatable way to address them through non-automated means.
Unfortunately, standard GRC tools are not fully automated; they typically rely on significant manual input. Furthermore, they do not provide complete insight into the current status of security controls coverage, the performance of those controls, and – crucially – any gaps between them.
This lack of consolidated visibility into all assets – devices, applications, user accounts, databases, etc. – across the enterprise makes it difficult for GRC teams to pinpoint control coverage gaps and to demonstrate compliance with external regulatory policies.
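The coverage-gap problem described above is, at its core, a set-difference question: which assets in the consolidated inventory are missing a given control? A minimal sketch, with invented asset and control names purely for illustration:

```python
# Hypothetical sketch: finding control coverage gaps by comparing a
# consolidated asset inventory against per-control deployment records.
# All asset names and control names below are invented for illustration.

def coverage_gaps(inventory, control_deployments):
    """For each control, return the inventory assets missing that control."""
    assets = set(inventory)
    return {
        control: sorted(assets - set(covered))
        for control, covered in control_deployments.items()
    }

inventory = ["host-01", "host-02", "host-03", "db-01"]
deployments = {
    "edr": ["host-01", "host-02", "db-01"],
    "vuln-scanning": ["host-01", "host-03"],
}

gaps = coverage_gaps(inventory, deployments)
# gaps["edr"] -> ["host-03"]; gaps["vuln-scanning"] -> ["db-01", "host-02"]
```

The hard part in practice is not the set arithmetic but building the consolidated inventory in the first place, which is exactly what the qualitative, questionnaire-driven approach fails to provide.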
This is highly problematic because answers to regulators’ questions will invariably lie in data scattered across the organization. Much of what GRC teams need to compose their responses to regulatory questions will come from data collected by security colleagues (see below), but in any case, GRC tools are geared up to obtain subjective data collated via qualitative questionnaires that build an approximated picture from representative samples rather than reflecting the full, quantifiable reality. Incomplete and/or unreliable information prevents any clear assurance of whether the relevant controls are deployed and operating on all assets.
Requests are coming thick and fast
Financial institutions have plenty of cyber-related regulations to worry about and, for the largest in particular, the number grows almost by the month. Data privacy laws, as just one example, are now in force in 120 countries. This puts acute pressure on the GRC departments of international institutions, for whom local regulations apply regardless of whether their operations in a certain national jurisdiction constitute a major or a minor presence.
We know that these increasingly frequent cyber-related requests, and the difficulty of addressing them independently with existing GRC toolsets, are creating friction between GRC teams and their cyber colleagues. A separate Panaseer study polled a group of 420 CISOs at large financial institutions about these knock-on effects and found that, on average, GRC teams were requesting metrics from security once every 16 days, at a cost of up to 5 days per month diverted away from front-line cyber defense. A total of 29 percent claimed risk teams demand data from them every single day.
Data accuracy and request volume are the biggest GRC cyber challenges
In our GRC leaders peer survey, “access to accurate data” and “number of report requests to deal with” were cited as the top two security challenges.
The number one issue is accurate data (or rather, a lack of it), cited as the most significant security issue by more than one-third (35 percent) of respondents. This appears to be a bigger problem among the smaller institutions surveyed, with 40 percent of those employing between 5,000 and 9,999 people placing it first versus 33 percent at those with 10,000+. This disparity could be explained by the sheer scale of manually-intensive resources that the largest institutions are able to call upon to collate richer data and invest time validating it. In any case, it’s clear that the same difficulties in grappling with complexity and sprawl afflict smaller institutions, despite their having fewer endpoints, applications, and systems than their larger peers.
The response “number of report requests to deal with, understanding and clarity of report requests” was cited as the greatest security challenge by 29 percent of respondents.
Too few GRC leaders are confident in data shared with regulators
The magnitude of these challenges is borne out in the apparent lack of supreme confidence GRC leaders have about the quality and timeliness of the data provided to regulators in response to requests. It is worth remembering that these are some of the largest and most advanced financial institutions in the world, with enormous resources and an acute sensitivity to the needs of maintaining a spotless regulatory compliance record that never risks harm to their public reputations or continuity of business operations.
With all that being said, only 39 percent of respondents stated they were “very confident” in the accuracy of security data provided to regulators on request. More staggeringly still, a further 7 percent admitted they were “neither confident nor unconfident”, which any fair-minded observer would have to agree constitutes something of a damning indictment.
It doesn’t get much better when it comes to confidence in responding to regulatory requests quickly enough. Here, fewer than half (41 percent) claimed to be “very confident” in their ability to fulfill the security-related requests of regulators in a timely manner.
These are not the responses one would expect of senior risk and compliance professionals presiding over slick, well-functioning processes. Another finding compounds this troubling perspective: only 27.5 percent of respondents said they were “very satisfied” that their organization’s security reports align to regulatory compliance needs like GDPR and CCPA.
Too manual, meaning too inefficient, prone to errors, and lacking context
The tools that GRC teams commonly use to collate data in response to regulatory requests rely heavily on qualitative questionnaires. Some questions are binary; others are significantly more detailed. As outlined above, this is owing to the absence of a rigorous, data-driven (bottom-up) approach to establishing the on-the-ground reality of which security controls are in place, what they cover, and how they are operating. Instead, these questionnaires feed into a process that seeks to establish whether certain parameters are in place by garnering attestation from stakeholders and by sampling data.
There are many limitations to such a manual, questionnaire-driven approach, including:
- Massively inefficient – The largest institutions may employ 100 people or more to manually undertake qualitative compliance checks. Consider for a moment how wasteful that is, and how poorly it scales in the face of ever greater requirements. According to our survey, most organizations have automated some aspects of their processes (more details below), while 2.5 percent have automated none whatsoever.
- Lacking in context – GRC tools cannot isolate and identify applications associated with particular business processes, or the interrelationships between assets and the people who interact with them, or – more to the point – the impact that risks posed by these factors may have on the business. The disconnected, check-box nature of qualitative assessment makes it all but impossible to assess the total, cumulative risk generated by ‘toxic combinations’ of risk factors. Our survey found a groundswell of support for improvement in this regard, with 30 percent agreeing the ability to prioritize risk remediation based on impact to the business is “very important” and a further 66 percent as “somewhat important”.
- Too much subjectivity – Qualitative questionnaires produce evidence that is significantly more subjective than objective. Sampling also yields less reliable results than an approach that takes in the full picture. Other accuracy issues include the potential for human error, bias, or even abuse that must be considered when employing a non-automated system.
- Point-in-time rather than real-time – The results of such manual processes give only a ‘point-in-time’ estimation of compliance posture, which may be sufficient to satisfy one request but which requires the same process to be repeated again and again whenever the same verification is sought.
In our GRC leaders study, 92 percent of senior risk and compliance professionals responded positively to the value of harnessing both quantitative and qualitative security controls assurance, reflecting the strong appetite for an improved toolset.
Attitudes to automation are encouraging
While GRC leaders may be laboring under a broken, inefficient, and ‘top-down’ system, there is plenty of evidence from our research to suggest they are progressive in their outlook toward more streamlined, automated, and comprehensive methods of surfacing security metrics.
One of the reasons for this is expediency, with the tightening effect of increasingly stringent legislative requirements making the search for alternative approaches more pressing. Recent examples of this, such as the Monetary Authority of Singapore (MAS) Notice 655 on Cyber Hygiene (which calls for banks to attest to having endpoint detection and response software deployed and operational on every asset, at all times), reflect a heightened level of expectation on the part of regulators that such requests should not be considered unreasonable.
Automating processes would go a considerable distance toward solving these challenges, but our survey found organizations still have some way to go. A total of 93.5 percent of GRC leaders agreed that it is important to automate security risk and compliance reporting, but only 26 percent have so far achieved it. And while it is good news that data collection (49 percent of respondents) and data analysis (67 percent) processes are being automated, until full automation arrives, many of the problems associated with manual processes will remain, such as human error and difficulty achieving pace and scale.
Rethinking the GRC toolset with CCM
The whole challenge of responding to regulatory requests would be alleviated by GRC tools that can harness accurate data in an automated rather than manual way, access the required information without dragging overstretched cyber teams into the fray, and easily transform it into the formats different regulators demand.
With a consistent up-to-date view of security controls deployments, the accuracy and timeliness of responses will be improved since assessments will be derived from instrumentation instead of subjectivity.
The latest Gartner Hype Cycle for Risk Management details a new technology that promises to deliver this capability. Called ‘Continuous Controls Monitoring (CCM)’, Gartner defines it as: “…a set of technologies that automates the assessment of operational controls’ effectiveness and the identification of exceptions”.
Purpose-built CCM tools sit on top of existing tooling, ingest data from across security, IT, and business tools, and can clean, normalize, and de-duplicate data before correlating aggregated data to individual assets. They can also integrate with GRC tools to automatically populate them with security controls assurance data.
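The ingest-normalize-de-duplicate-correlate pipeline described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation; the tool names, field names, and records are all assumptions:

```python
# Illustrative sketch of the CCM pipeline described above: ingest records
# from several tools, normalize identifiers, de-duplicate, and correlate
# them into a single consolidated view per asset. All names are invented.

def normalize(record):
    """Canonicalize the asset identifier (trim whitespace, lowercase)."""
    return {**record, "asset": record["asset"].strip().lower()}

def correlate(feeds):
    """Merge records from all feeds into one consolidated dict per asset."""
    assets = {}
    for tool, records in feeds.items():
        for rec in map(normalize, records):
            entry = assets.setdefault(rec["asset"], {"sources": set()})
            entry["sources"].add(tool)  # track which tools saw this asset
            entry.update({k: v for k, v in rec.items() if k != "asset"})
    return assets

# Three feeds reporting the same asset under slightly different spellings:
feeds = {
    "cmdb": [{"asset": "Host-01 ", "owner": "finance"}],
    "edr":  [{"asset": "host-01", "edr_agent": "running"}],
    "vuln": [{"asset": "HOST-01", "open_criticals": 2}],
}

view = correlate(feeds)
# view["host-01"] combines all three records into one asset entry
```

The payoff of the normalization step is visible in the example: three tools reporting the same machine under different spellings collapse into a single asset record, which is what makes automated coverage and gap reporting possible downstream.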
By using CCM to align security controls with framework standards, GRC teams can track and report adherence to best practice standards and regulatory mandates.
The compelling benefit of CCM is its ability to reflect “what’s really going on” in a fast and non-disruptive way, uncovering gaps in security controls deployment coverage wherever they are, and preventing even the merest suggestion that the organization’s risk management is itself ‘risky’.
That’s something that benefits every aspect of the organizations charged with upholding the best practice policies of security and compliance, from GRC leaders and cyber teams all the way up to the leadership of the business.
About the Author
Charaka has spent the last 5 years engineering and building Hadoop-based security analytics applications to detect cyber threats. He led business development for the BAE Systems CyberReveal product, taking it to over 40 clients in Financial Services, Technology, Telecommunications, Energy, Pharmaceuticals, and Foreign Government across EMEA, North America, and APAC.
Charaka is the brains behind our big data technology. His team leads the way in generating innovative techniques for deriving new security insight for our customers.
Charaka can be reached online at @charakag
and at our company website http://panaseer.com/