In Part 1 of this GDPR blog series, we looked at the PII threat landscape and the legal & financial consequences of technology failure. In Part 2, we highlighted the major provisions in the GDPR for technological measures to protect data. In Part 3, we examined some of the guiding principles to be considered in relation to the technological impact of the GDPR within an organization. In Part 4, we looked at the timelines for implementation.
In this final post of the series, Max Pritchard explores the technologies that organisations should be investing in, with a round-up of his top ten technology must-haves.
Technology service areas for GDPR compliance
It is impossible to create a single digestible document that comprehensively covers a general plan for EU GDPR compliance for every possible organisation. The following is a digest of likely “hot topics”, arranged in a narrative structure and informed by recent evidence from real data breaches.
Applications used to collect, store and process personal data
The GDPR mandates modern applications to govern the business processes of handling personal data about EU residents. These applications need to demonstrate security by design and by default – but must also support the various rights of the EU resident: the right of access, the right to rectify, the right to erase, the right to restrict processing, and the right to transfer, among others.
Mapping an organisation’s need for personal data, establishing and tracking consent (or another legal basis for holding and processing that data), and keeping appropriate records of consent, activity logs and access is likely to be a major task in itself, particularly if handled in-house. Organisations should be challenging their IT applications provider for details of how these requirements are met in current applications – or consider moving to a provider that can meet them.
As a consequence of the GDPR, a new breed of cloud-based services has emerged, specifically designed to help companies that lack the in-house capability to meet GDPR requirements for consent capture and tracking, for example.
Dodging the bullet – PCI DSS, scoping and GDPR
One thing that working with PCI DSS in the retail and e-commerce industries teaches you very quickly is that an opening step to minimise the cost of compliance is to render as much of the corporate network as possible out of scope, minimising the footprint for audit and certification. The same approach can be adopted for GDPR compliance.
First is the ruthless extermination of extraneous data relating to EU residents: if it is not absolutely required, don’t collect it in the first place. The second technique is the pseudonymisation of personal data. This means separating data that can uniquely identify an individual from other operational data about them or the transaction they are engaged in.
Pseudonymisation tools can somewhat reduce the risks associated with breaches, but the GDPR takes the view that if the data can conceivably be combined with other data to uniquely identify an EU resident, then it is still in scope. Only full anonymisation – the complete separation of personal data from other associated data – constitutes a sufficient technological measure to render a data set out of scope. There can be no way of recombining the data with other data to identify a specific natural person.
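As a minimal sketch of the pseudonymisation idea, the record below is split into an operational part and a separately stored identity table keyed by a random token. All field names and the `identity_store` structure are invented for illustration; a real system would hold the identity table in a separately secured service.

```python
import secrets

identity_store = {}   # in practice, a separately secured system

def pseudonymise(record):
    """Replace direct identifiers with a random token and store them apart."""
    token = secrets.token_hex(16)
    identity_store[token] = {k: record[k] for k in ("name", "email")}
    operational = {k: v for k, v in record.items() if k not in ("name", "email")}
    operational["subject_token"] = token
    return operational

record = {"name": "Jane Doe", "email": "jane@example.com", "basket_value": 42.50}
op = pseudonymise(record)
# `op` carries no direct identifiers; re-identification requires identity_store,
# which is why the GDPR still treats pseudonymised data as in scope.
```

Note the key point the text makes: because the token can be recombined with the identity table, this is pseudonymisation, not anonymisation.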
A third technique is a general approach of limiting the number of people, applications, third parties and trust relationships that are involved in, or trusted by, the operational personal data management system, and reducing the complexity of the code associated with data management. In short – simplify.
Regardless of attempts to minimise the attack surface and the extent of data collection and processing, most businesses will inevitably find themselves holding an amount of data regarding EU residents that needs securing, and so the challenge remains: how do you go about securing personal data?
Securing personal data
Outside of the operational, everyday use of applications that process or store personal data, one needs to ensure the data’s confidentiality, integrity, and availability (CIA). The EU GDPR is notable for not being prescriptive about the technology, techniques or tools to be employed by business in this task. Given the rather fragmented nature of the network security industry, it is also difficult to find good information about a systematic approach to achieving compliance.
There are a variety of cybersecurity frameworks available that can help guide an organisation’s approach to reviewing its protection of data covered by the GDPR: for example, the US National Institute of Standards and Technology (NIST) cybersecurity framework, RFC 2196 (the IETF Site Security Handbook), and ISO 27001.
The NIST cybersecurity framework, for example, establishes five core functions in managing cybersecurity risk: Identify, Protect, Detect, Respond and Recover. Breaking the problem down into smaller areas of focus is a reasonable idea.
Solution area #1 – Discovery, asset classification, data-loss prevention (DLP)
The first step in most information security frameworks is developing an understanding of what you are protecting – in a limited sense, identifying what personal data is held by an organisation, where it is, and which applications can access it. Modern and larger organisations have increasingly complicated digital estates. Virtualisation, cloud services, SaaS, peer-to-peer applications and many other innovations have made identifying where data is held, which device holds it, what network it traverses and even, in some cases, which CPU is processing it, an ongoing process of discovery rather than a one-off audit.
Copies of personal data are not only found in operational systems, but also in test or development environments, analytics and log servers, exported onto company laptops or mobile devices, replicated into cloud storage or collaboration tools and even backed up remotely. Modern serverless cloud services are even challenging the assumed integrity of the bus that connects the CPU to the RAM and storage within a hardware device – now individual computing functions can be outsourced to a third party and processed on a generic processor in a cloud facility.
In light of the massive volumes of data that exist in unstructured forms across an organisation, and the ever-increasing diffusion of the infrastructure employed to process and store it, a technological response to locate, identify, catalogue and classify all of the pertinent data sources is an essential step, and lays the foundation for acting to protect the data.
Data Loss Prevention (DLP) applications that sit on endpoints and servers promise the capability to track down, classify and block sensitive data at the device level. Effective tools, by nature, can be monstrously complicated and are aimed more at enterprises. Enterprises may also make use of “outside-in” scanning, which looks for devices visible to the Internet or containing data pertaining to company trademarks, addresses or sites. There are also software visibility tools that can track cloud server deployment on AWS and Azure, for example, and spot when servers sit outside the corporate perimeter.
Discovery services build asset databases that can then be used to inform risk analysis and testing programmes.
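To illustrate the kind of pattern-matching a discovery or DLP scan performs, here is a toy scanner that flags strings resembling e-mail addresses and UK National Insurance numbers in unstructured text. Both regular expressions are deliberately simplified for this sketch and would miss many real-world formats.

```python
import re

# Simplified patterns, for illustration only
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
NINO = re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b")

def scan(text):
    """Return (label, match) pairs for anything resembling personal data."""
    findings = []
    for label, pattern in (("email", EMAIL), ("nino", NINO)):
        findings += [(label, m.group()) for m in pattern.finditer(text)]
    return findings

sample = "Contact jane@example.com, NI number QQ123456C, invoice 9913."
hits = scan(sample)
```

A real discovery tool applies far richer detection (checksums, context, document fingerprints) across file shares, databases and endpoints, but the principle of classify-then-catalogue is the same.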
Solution area #2 – Encryption
Assuming that breaches are inevitable, one of the first questions that will be asked of any organisation after a loss of data is “Was the data encrypted?” Encryption is one of the few specific technologies called out in the text of the GDPR.
If an organisation has lost unencrypted data in a breach, that is a clear indication that it has failed to meet the standards of protection required by the GDPR: not only did the security fail, allowing the breach, but the organisation had not recognised that 100% information security is impossible to achieve. Such organisations will find it difficult to argue that they took proportionate steps to secure personal data.
All personal data, at rest or in motion, should be encrypted to minimise the damage caused by a breach.
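As a sketch of what field-level encryption at rest can look like, the snippet below uses Fernet (authenticated symmetric encryption built on AES in CBC mode with an HMAC) from the third-party `cryptography` package – an assumed library choice, not a recommendation from the original text. Key management (storage, rotation, access control) is the hard part and is out of scope here.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()   # in practice: fetch from a key vault or HSM
f = Fernet(key)

plaintext = b"jane@example.com"
ciphertext = f.encrypt(plaintext)   # safe to persist in the database
recovered = f.decrypt(ciphertext)   # only possible with the key
```

The design point: a stolen database dump of `ciphertext` values is useless without the key, which is why breach-notification questions begin with “was it encrypted?”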
The main tasks in implementing encryption are identifying and classifying the data to be encrypted, and then choosing the right application or device to do the encryption. It is easy to forget personal data potentially held in unstructured form – e-mail servers and recordings of telephone calls – or support-process information such as back-ups and any copies used for development or testing.
There has recently been a spate of databases containing personal data appearing on insecure cloud servers, where a third party or developer has quickly spun up a storage instance to stash a database for some work project and then forgotten about it.
Encryption can be bypassed. The two most common methods are malware intercepting the data outside of its encrypted state, and the theft and use of compromised credentials to access the data. These areas should be considered next.
Solution area #3 – Credentials, authentication and access management
In the 2016 Verizon Data Breach Report, 80% of the actual data breaches analysed came from outside the company from which the data was stolen. The number one tool employed was stolen, weak or default credentials to gain access (involved in 63% of confirmed breaches). For systems used to process or access stored personal data, passwords are no longer sufficient (if they ever were).
Multi-factor authentication (MFA), particularly for key applications and privilege levels (administrators, for example), might have prevented some extraordinary breaches – such as the security consulting and audit firm Deloitte Touche Tohmatsu suffering a breach in which an unauthorised actor had administrative access to its entire e-mail system for an indeterminate period of time, certainly months.
However, evidence from the market suggests that MFA is not employed as widely as it should be. Dropbox revealed in 2015 that only 1% of Dropbox accounts were protected by multi-factor authentication. For many users, MFA is simply annoying and restrictive – but failing to employ it in systems handling personal data, or in supporting functions such as IT, is a big miss.
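To demystify one common second factor, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 (SHA-1, 6 digits, 30-second step) – what an authenticator app computes. Production systems should use a maintained library and handle clock skew and secret provisioning.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation to a short decimal code."""
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", time 59
code = totp(b"12345678901234567890", 59, digits=8)
```

Because the code depends on a shared secret *and* the current time, a phished password alone is not enough to log in, which is the property the paragraph above is arguing for.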
In addition to MFA, to reduce the impact of the loss of credentials, companies will want to educate users about good password practice, employ password management tools, register company domains with databases of usernames and passwords exposed in historical breaches, and test user passwords against brute-force tools or check whether users are re-using passwords between home and work accounts.
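Checking passwords against breach databases can be done without revealing the password itself. The sketch below shows the k-anonymity scheme used by the Pwned Passwords service: only the first five hex characters of the password’s SHA-1 hash would be sent; the service returns all suffixes sharing that prefix and the client searches locally. The network call itself is omitted here.

```python
import hashlib

def hash_split(password: str):
    """Split the SHA-1 of a password into the 5-char prefix that would be
    sent to the breach-lookup service and the suffix kept locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_split("password123")
# A real client would GET https://api.pwnedpasswords.com/range/<prefix>
# and search the response for `suffix`; a match means the password has
# appeared in a known breach.
```

The design choice matters: the service never learns the full hash, so screening passwords does not itself create a new credential-leak risk.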
IT and security teams also need strong procedures and tools around installing new IP devices on networks, or creating new instances of storage or processing capability using cloud services. IoT devices such as CCTV cameras, screens and environmental telemetry controls have been known to ship with static default passwords or firmware flaws and, in tests, such devices can be compromised within 90 seconds of becoming visible on a network.
The use of bogus credentials has also been apparent in social engineering attacks such as Business E-mail Compromise (BEC) where company administrators have been fooled into releasing sensitive data simply by someone requesting it under the guise of a board member or trusted colleague.
Controls: At best, passwords offer only weak protection, regardless of apparent length and strength. Multi-factor authentication should be mandated on all administrative business functions of any importance, as well as business communication hubs (social media, for example). Education should be augmented with password management tools. Register company domains on haveibeenpwned.com or a similar service.
Solution area #4 – Anti-malware
Malware was the second most common tool involved in data breaches in 2015 (according to the Verizon data breach survey). Here ‘malware’ includes viruses, worms, trojans, backdoors, keyloggers, RAM scrapers and other software designed with malicious intent. In any form, malware is a great risk to the security of business networks in general and personal data specifically.
There are a number of mechanisms commonly involved in malware outbreaks: phishing e-mails with infected attachments or links to websites that serve malware in downloaded content; malware woven into web-based adverts or hidden in innocuous-looking mobile applications; exploits of software vulnerabilities; even USB sticks left near corporate offices labelled “private photos” or similar.
Tools employed to try and control malware outbreaks include, but are not limited to, e-mail filtering, anti-malware gateways, endpoint protection, application patching, restricting introduction of unauthorised hardware/storage and LAN segmentation to limit the spread of an infection or worm.
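For context on how classic anti-malware detection works (and why, as noted below, it is increasingly ineffective), here is a toy signature check comparing a payload’s SHA-256 against a set of known-bad hashes. The payload and hash set are invented; the point is that changing a single byte defeats the match.

```python
import hashlib

# Invented "signature database" of known-bad file hashes
KNOWN_BAD = {hashlib.sha256(b"EICAR-like test payload").hexdigest()}

def is_known_malware(payload: bytes) -> bool:
    """Classic hash-based detection: exact match against known samples."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

caught = is_known_malware(b"EICAR-like test payload")
missed = is_known_malware(b"EICAR-like test payload!")  # one byte changed
```

This fragility is exactly why polymorphic malware pushed the industry towards the behavioural analysis and machine-learning approaches discussed next.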
Encryption may not protect a company from malware: ransomware threatens the availability of data, RAM scrapers and keyloggers operate outside of the traditional encryption envelope, and credentials stolen by malware might be used to access data in its unencrypted form.
Increasingly, even regularly updated hash- or signature-based malware detection is ineffective. Network and endpoint behavioural analysis (H-IPS, N-IPS) and machine learning are driving both solution and attack complexity upwards.
Radical solutions in high-security environments have to consider physical segmentation of networks from those used for outside communications – but there are families of malware designed to exfiltrate data even from air-gapped networks. Demonstrated techniques include several categories of out-of-channel communication, including data transmission through acoustic, light, electromagnetic, magnetic and other more exotic methods.
Though typically more of interest to academics and military organisations, it is nevertheless a useful “blue team” security design exercise to imagine systems robust against sophisticated actors – and a cautionary note for other organisations about the challenges of preventing breaches.
Solution area #5 – E-mail
E-mail was the killer application that motivated organisations to embrace internetworking in the 1980s and 90s. Its use in business is now near-ubiquitous, and when an organisation bans staff use of e-mail it makes the news. Where e-mail has been cut – typically because of concerns about employee productivity and work-life balance rather than security – it is often replaced with a slew of new applications: chat and collaboration tools.
As a ubiquitous business tool with the capability to transfer data in, out and within the business, it has to be well-policed. Threats to information security by e-mail are many, varied and now so common they border on the mundane. Organisations should be on the look-out for malware in attachments, links to malware-infested web-sites, BEC phishing and whaling as well as sensitive data being sent out of the company as attachments or embedded within the body of an e-mail.
Over-exposure to lists of cybersecurity dos and don’ts, and repeated training on e-mail threats, has dulled the efficacy of “the human firewall”. In tests of business workers, nearly 30% of phishing e-mails are opened and over 1 in 10 people (12%) click on attachments they were not expecting. Only 3% of phishing e-mails were reported to management. Users clearly need help rather than further punitive awareness training videos.
E-mail databases are often replete with personal data, and are critical enough for the company to have invested time and effort on backing up the data. GDPR compliance will probably mean a lot of attention being given to the cultural use of e-mail within the organisation, the policies around data encryption, attachment handling, inbound and outbound filtering and long-term archiving and retrieval.
Controls: E-mail filtering – block undesirable attachment types. Anti-malware and endpoint protection. Employee education. LAN segmentation. Multi-factor authentication. Data loss prevention and outbound monitoring.
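One of those controls, blocking undesirable attachment types, can be sketched with the standard-library `email` package. The block-list below is illustrative only, not a recommended policy; real gateways also inspect content, not just filenames.

```python
from email.message import EmailMessage

# Illustrative block-list of risky attachment extensions
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".docm"}

def blocked_attachments(msg: EmailMessage):
    """Return the filenames of attachments a gateway would quarantine."""
    bad = []
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            bad.append(name)
    return bad

msg = EmailMessage()
msg["Subject"] = "Invoice"
msg.set_content("See attached.")
msg.add_attachment(b"MZ...", maintype="application",
                   subtype="octet-stream", filename="invoice.exe")
```

Filename filtering is a first line only – attackers rename files and nest archives – which is why the controls list pairs it with anti-malware scanning and outbound monitoring.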
Solution area #6 – Web
Web browsing is nearly 28 years old and has come a long way since NCSA Mosaic and Netscape in the nineties. For many today, the world wide web and the application ecosystems it supports ARE the Internet. It is nearly as ubiquitous as e-mail as a communication tool.
Employees browsing the web are an inbound vector for malware and an outbound vector for possibly sensitive information sent to remote and unknowable web servers and uncountable third-party plug-ins. Companies also have estates of devices and third-party services that use web control panels for administration. Company web servers offer businesses a way of automating contact with consumers and may form part of a technology solution for managing personal data.
So whether it is protecting web users within the workforce, preventing abuse of browsing facilities, or defending company web applications inside and outside of the company network, the web deserves special attention. Speaking of web servers takes us onto the next solution area.
Solution area #7 – Servers
In actual data breaches, 35% of the assets targeted were company servers. Some were hacked using stolen credentials, others succumbed to zero-day or other vulnerabilities and still more were exploited by common web hacks such as SQL-injections and cross-site scripting (XSS).
The number one cause of actual data breaches in 2015 in the financial services, entertainment, education and information markets was web application attacks, and they were a significant factor in breaches in the manufacturing, professional services and retail sectors as well. If companies begin to use web tools to automate EU GDPR compliance activities such as data export, amendment and portability, web application attacks and abuse of web facilities by bots will increase.
Web servers are commonly connected to databases containing vast repositories of personal data. Alongside critical network protections, company web assets need specific web-application firewalls (WAF) to make sure that organisations don’t fall foul of abuse by bots, or simple web hacks.
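Alongside a WAF, the application-side defence against the SQL injection attacks mentioned above is parameterised queries: user input is passed as data, never spliced into the SQL string. The sketch below uses the standard-library `sqlite3` with an invented table to contrast the two patterns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("jane@example.com", "Jane"))

attacker_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query
unsafe = conn.execute(
    "SELECT name FROM users WHERE email = '" + attacker_input + "'"
).fetchall()   # the OR '1'='1' clause returns every row

# Safe pattern: the driver binds the payload as a literal value
rows = conn.execute(
    "SELECT name FROM users WHERE email = ?", (attacker_input,)
).fetchall()   # matches nothing
```

A WAF can catch known payload patterns at the network edge, but parameterisation removes the vulnerability class at source, which is why both belong in the controls list.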
Server software is vulnerable to being exploited through flaws in the software itself. Vulnerabilities are published regularly, and software vendors develop security patches designed to cover each vulnerability and prevent an attacker compromising the server. In the worst-case compromise, an attacker can obtain administrative access and complete control through arbitrary code execution and subsequent privilege escalation. Lesser attacks may involve the ability to eavesdrop, or simple denial of service.
In many data breaches, attention is drawn to the patching processes and policies in place in an organisation. Companies often do not run the latest operating system or application versions, as updating systems is complex, expensive and prone to errors, making it expedient (or even necessary) to stick with legacy environments.
The recent worm outbreaks across the globe have made it clear that this vector is a concern for hundreds of thousands of businesses. The recent attacks have been for direct material gain (ransomware), but there would be nothing to stop a worm from exfiltrating data from infected systems rather than encrypting it and trying to extort money for its return.
A methodical approach to patching that emphasises consistency and coverage beats expedient patching. Published vulnerabilities are exploited quickly – particularly vulnerabilities in Adobe and Microsoft software; one recent report put the median time from publication to exploitation at 30 days. Old vulnerabilities are still heavily targeted, and a patch may not be available – so older systems may need to be isolated from other devices on the network and access restricted to prevent worms or other automated infection.
Routine and frequent vulnerability scanning can show a company what an attacker may already know about its estate and prompt useful remediation activities. Where patches are not available – or have to be subject to scrutiny because they might themselves impact the availability of a web application – companies might need the tools to rapidly move servers to different security zones on the network, with more aggressive filtering and monitoring, until patches are available.
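The core of such a scan is a version comparison: each installed package is checked against the minimum version that fixes a published vulnerability. The package names and version numbers below are invented for illustration.

```python
def parse(version: str):
    """Turn '2.4.49' into a comparable tuple (2, 4, 49)."""
    return tuple(int(part) for part in version.split("."))

# Invented advisory data: minimum fixed versions per package
MIN_FIXED = {"webserverd": "2.4.51", "libtransport": "1.1.1"}

# Invented inventory from a discovery scan
installed = {"webserverd": "2.4.49", "libtransport": "1.1.1"}

needs_patch = [pkg for pkg, version in installed.items()
               if parse(version) < parse(MIN_FIXED[pkg])]
```

Real scanners match CPE identifiers against vulnerability feeds such as the NVD rather than a hand-built table, but the remediation output – a prioritised list of packages below their fixed version – is the same.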
Controls: Server discovery. Remove unused services. Firewalling (including WAF). Consistent and comprehensive patching. Isolation of vulnerable servers. Regular vulnerability scans. Management of administrative access. Constant monitoring.
Solution area #8 – DNS
Another critical Internet service that is often considered a weak spot for attack is the Domain Name System (DNS) – the mechanism by which human-readable names are converted to machine-readable addresses.
Admin accounts for company domains must be secured to ensure they are not hijacked and users redirected to false websites. DDoS is a constant threat to DNS servers and can result in an effective loss of availability of all online systems that rely on them. There are man-in-the-middle attacks that can see user sessions hijacked and redirected by employing domain names that are similar to, or common mis-spellings of, corporate domains. Discovering these kinds of fake or pharming sites – and then working to inform and educate customers and shut down offending sites – might require technology solutions, particularly for organisations with large estates of domains.
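Discovering look-alike domains usually starts by generating the mis-spellings an attacker might register. The sketch below produces simple omission, swap and substitution variants of a domain; monitoring tools would then check each candidate against live DNS and WHOIS records (omitted here).

```python
import string

def typo_candidates(domain: str):
    """Generate simple typosquat variants: omissions, adjacent swaps,
    and single-character substitutions of the domain label."""
    name, _, tld = domain.partition(".")
    candidates = set()
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)        # omission
        if i < len(name) - 1:
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            candidates.add(swapped + "." + tld)                    # swap
        for c in string.ascii_lowercase:
            candidates.add(name[:i] + c + name[i + 1:] + "." + tld)  # substitution
    candidates.discard(domain)
    return candidates

lookalikes = typo_candidates("example.com")
```

Production tools add homoglyph attacks (rn for m, Cyrillic look-alikes) and alternate TLDs, which multiply the candidate set considerably.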
Solution area #9 – Insider threat or privilege misuse
If outsiders are implicated in 80% of actual data breaches, that still leaves a sizeable threat from insiders. Breaches from someone inside the business may be proportionately rarer, but they have the potential to be much more damaging. Insiders have knowledge, contacts, trust and, frequently, fewer defences to work around to achieve their objectives.
Financial gain and espionage are the main drivers of insider misuse of data access privileges, but grudges are a factor too. Fewer breaches were attributed to management and senior staff (14%), or to those with privileged access such as IT or security (14%), than to other levels of seniority and access (35%). Most insider misuse was abuse of existing privileges for unsanctioned purposes, but there is also a strong threat from data mishandling (copying data to shared external drives), the introduction of unsanctioned hardware or software, and moving data off company premises using portable media.
Denial of service attacks from insiders have faded in recent years, but staff with administrative access to databases, portals or network devices are frequently trusted with the keys to the kingdom and have the capacity, if not generally the inclination, to fundamentally undermine the availability of IT services.
Controls: Make sure privileges stop as soon as an employee stops working for a company. Monitor worker access to sensitive data in particular. Data Loss Prevention tools on common outbound data applications (cloud storage such as Dropbox, Box and others, and e-mail) can help identify data exfiltration attempts. A CASB solution might allow more granular control of user access to public cloud applications. Focus on USB drives and other portable media. Use network monitoring for unsanctioned devices and applications to control ‘shadow IT’. Quis custodiet ipsos custodes – who watches the watchmen?
Solution area #10 – Breach detection and incident response
The tail-end of the GDPR requirements is for companies to be in a state of constant readiness to respond with speed and transparency upon detection of a breach in security that has placed personal data at risk. Companies must notify the authorities (the Information Commissioner’s Office – ICO in the UK) when a serious breach is detected and also the impacted users if their personal data is at risk as a consequence of the breach.
Cyber attacks vary wildly in their level of sophistication and subtlety. Technical solutions that can help in detecting subtle breaches include IPS, honeypots, log analysis, and strong security information and event management (SIEM). Regular penetration tests, DDoS tests and vulnerability scans can help identify holes in defences in advance of actual data loss.
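To make the log-analysis idea concrete, here is a toy version of a rule a SIEM automates: flag source IPs with an unusually high number of failed logins in a window. The log lines and the threshold are invented for this sketch.

```python
from collections import Counter

# Invented authentication log lines: "<source-ip> <result> <user>"
logs = [
    "203.0.113.9 FAILED admin",
    "203.0.113.9 FAILED admin",
    "203.0.113.9 FAILED root",
    "198.51.100.4 OK jane",
    "203.0.113.9 FAILED admin",
]

# Count failed attempts per source IP and flag anything over a threshold
failures = Counter(line.split()[0] for line in logs if " FAILED " in line)
suspects = [ip for ip, count in failures.items() if count >= 3]
```

A real SIEM correlates events across many sources and time windows and feeds alerts into the incident response plan described below, but the detect-then-escalate pattern is the same.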
For some breaches, however, it is most likely that the first sign will come from outside the business – from law enforcement finding data after a raid or botnet shutdown, or a security researcher identifying a weakness in defences. In any event, a response plan needs to be in place, with access to the tools and processes to quickly gather reliable evidence and to ensure that the required standards of communication to the authorities and to end users are met in a timely manner.
In its publications, the ICO is keen to point out that notification is not required for every breach, and that it is acceptable to provide notification of a possible breach in advance of having the full information, as long as that information is forthcoming in a timely manner. The more severe the risk, the greater the need and expectation of rapid, accurate and transparent notification.
Other controls to consider
There are many other potential areas of control to consider – such as equipment and media disposal, physical security, USB and portable media management, and mobile device management – but comprehensive analysis is outside the scope of this document. The GDPR is not specific, and any technical response is likely to be an evolution of existing security practices, tightening up controls where personal data is involved, in much the same way that PCI DSS focused SecOps on the parts of networks and businesses that handle payment card data.
In networking circles, the Open Systems Interconnection (OSI) 7-layer model is well known and routinely referred to when discussing matters of interoperability and security across network devices. Each layer of the model has a specific function and is reliant on, or related to, the layers below.
Information security within an organisation can also be visualised in layers, and alongside the NIST cybersecurity framework this could lead to a method of systematically mapping and analysing an organisation’s information environment.
If you would like to arrange a GDPR Readiness Technology Assessment, please call activereach on 0845 625 9025 and ask to speak to one of our GDPR Experts.