Techfellow Security Blog

Artificial Intelligence in healthcare, an intelligent way forward?

 

Dr Wendy Ng, CISSP, CCNP; 15th May 2018

 

We may be victims of our own success when it comes to healthcare. Thanks to improved scientific understanding and ever-diligent practice, we are extremely proficient in the treatment and management of acute conditions, including infectious diseases. However, whilst this success has increased life expectancy, the perfect storm of an ageing population, sedentary lifestyles and a diet of processed foods has resulted in an increased incidence of chronic health conditions, including heart disease and cancer. Unlike acute conditions, these often require ongoing care and management.

This takes place against a backdrop of severe financial constraint in an NHS gripped by staff shortages, with as many as one in 12 positions currently unfilled. Applied with the right expertise, technology has delivered significant productivity gains in other disciplines; it should be an integral part of the solution for the future of the NHS.

Early in the decade, I had the privilege of working with a wonderful mathematician on the use of artificial neural network algorithms to automatically qualify the suitability of candidate protein crystals for structural analysis. The computational technique is now more commonly known as “deep learning”, a key underpinning of artificial intelligence (AI) systems. Similar to biological neural systems, a deep learning system is able to learn from past ‘experiences’, or training data, to make the appropriate abstractions and recognise patterns when presented with new information. Unlike humans, however, computer algorithms scale. Since then, AI has matured and is proving itself to be a practical tool in clinical medicine.
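To make the idea concrete, the sketch below (illustrative only, with invented features and synthetic data, and not the system described above) shows the basic train-then-classify pattern using a small neural network from scikit-learn: the model learns from labelled past examples and is then asked to judge new, unseen cases.

    # Illustrative only: a tiny neural network trained on synthetic data,
    # mimicking the "learn from past experience, then classify new cases" idea.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Synthetic "training experience": 500 examples, 4 made-up features,
    # labelled 1 (usable crystal) or 0 (not usable).
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small multi-layer perceptron: the "artificial neural network".
    model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)                      # learn from past 'experiences'

    print("accuracy on unseen cases:", model.score(X_test, y_test))
    print("prediction for a new candidate:", model.predict(X_test[:1]))

The same pattern, applied to vastly larger and richer clinical datasets, underlies the diagnostic systems discussed below.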

The delivery of modern healthcare is intrinsically entwined with technology; an especially close partnership between clinicians and technology can be found in the field of diagnostics. Diagnosis can be tricky, as patients often present with a plethora of symptoms; experience is key, with clinicians relying heavily on past presentations of similar sets of symptoms. The difficulty is that the number of such experienced clinicians does not match the demand, and massive-scale cloning of suitable medical professionals, memory banks of experience included, is still not feasible in today’s world.

The NHS spends £2.2 billion a year on pathology diagnostics alone, excluding spending on inappropriate treatments; a solution which can reduce this figure would support a system under considerable strain. Indeed, within the past year, AI systems have demonstrated effectiveness at the diagnosis of breast, colorectal, lung and skin cancers. Separately, researchers at the John Radcliffe Hospital in Oxford have developed an AI system which has been shown to be far superior to cardiologists at diagnosing heart disease, the misdiagnosis of which alone is estimated to cost the NHS £600 million a year, half of which could be saved by the AI system.

In addition to financial savings, AI systems could also relieve time pressures on clinicians, so that they can devote more time to actual patient care. A positive attitude from clinical professionals is known to affect patient outcomes; indeed, it is standard clinical practice to present even dire results in a positive manner so that patients can be in a resilient frame of mind. By the same token, if clinicians are stretched and stressed, the prognosis of their patients could be impacted. Within a system which is clearly stretched and under constraint, artificial intelligence could be just the medicine that the NHS needs. I have no doubt that AI technology will permeate healthcare within the next five years. Could a solution borne out of necessity allow the beloved NHS to become a pioneer of AI in healthcare?

 

Previous Articles:

“Data is the new gold” is the oft-quoted phrase used to describe the importance of data in the 21st century. The origin of this specific quote is still unclear; however, the sentiment was first expressed by the British mathematician Clive Humby as “Data is the new oil”. He and his wife, Edwina Dunn, were responsible for Tesco’s Clubcard, one of the first successful big data applications and one that transformed our shopping habits.

Data per se is not new; the difference is our ability to mine, analyse and gather intelligence from it en masse, with algorithms run on powerful computers. When aggregated, data with context becomes information, which is worth more than the sum of the individual parts; in fact, a lot more. The relationship between the volume of data and the intelligence derived from it can be exponential: the bigger the dataset, the more useful it is, which evidently presents temptations. This is reminiscent of the actual gold rush in 19th-century California, where very limited regulation of ownership led to plenty of wild-west behaviour. Although regulatory controls did mature, not all stakeholders benefited equally from the opportunities. Nevertheless, the gold rush did have tangible benefits, with significant infrastructure development throughout California and San Francisco’s population growing 180-fold in six years.

This same pattern of behaviour has been observed with the 21st-century gold, following revelations of apparent data misuse. Understandably, there has been outrage at the turn of events, with vocal calls for additional legislative controls on data use. Could this be a watershed moment for data and, specifically, for its regulation? Whilst baseline controls are required, regulators often provide stakeholders with a certain degree of autonomy, which is a necessity for innovation and progress. Once the dust settles, it is hoped that organisations will exhibit self-regulation, which should also allow legislators to show restraint. Data, like gold in the 19th century, has proved to be incredibly valuable, and it has brought not just economic wealth but tremendous knowledge. To allow this to continue, the wild-west behaviour behind the recent revelations will hopefully be curbed.

 

Previous Articles:

 

The bugs bite back

Dr Wendy Ng, CISSP, CCNP; 14th March 2018

 

 

Ransomware attacks, the most notable of recent times being WannaCry and NotPetya, have barely been out of the news. WannaCry affected IT systems in over 150 countries. NotPetya resulted in significant impacts for enterprises around the globe, although infections started in Ukraine. Although we have yet to experience attacks of the same scale, beneath the surface an electronic evolutionary process is brewing. For WannaCry, a second wave of attack was prevented in the immediate aftermath by Matthieu Suiche of Comae Technologies. Nevertheless, the industry has widely predicted that it will only be a matter of time before new variants of the original malware make a comeback.

British businesses are subjected to 38 new attacks a day, a statistic revealed by SonicWall, a company that specialises in content control and network security. Perhaps more worryingly, these often involve modified variants of vectors which had previously launched successful attacks, including WannaCry. Whilst the original variants have been neutralised, the new ‘strains’ use the same core attack mechanisms but have also ‘acquired’ adaptations to defenders’ techniques, thus becoming a ‘super’ strain of the original malware.

This mirrors the evolutionary process in biological bugs. Persistent antibiotic use, both in the clinic and in agriculture, provided the perfect incubating conditions for antibiotic-resistant bacteria, some resistant to multiple classes of antibiotics, the most notable being MRSA (methicillin-resistant Staphylococcus aureus). Once infection by an antibiotic-resistant variant takes hold, it is far more difficult, perhaps impossible, to treat. Where treatments are available, they will involve ‘last resort’ options, which are typically accompanied by significant side-effects and are difficult to administer. In healthcare, more emphasis has therefore been placed on prevention. In the example of MRSA, personal hygiene, including frequent hand washing, has become a central control. Where infection has occurred, early detection and mitigation will limit damage and contain further spread. In the case of electronic bugs, including ransomware, the same principles will stand enterprises in good stead. Instead of personal hygiene, enterprises should exercise ‘cyber’ hygiene, which will hopefully prevent those bugs from taking a second bite.

 

Previous Articles:

Dinner at the Arctic Circle

Dr Wendy Ng, CISSP, CCNP; 7th March 2018

 

Despite the red weather warning from Storm Emma, a group of cybersecurity leaders and start-ups gathered for a roundtable evening amidst the snowy scenes. It’s safe to say that there has always been a degree of healthy friction, both culturally and operationally, between large organisations and start-ups, and the two don’t always see eye to eye.

However, encouraged by an evening of fine food and wine as well as lively debate led by Stephen Bonner (Cyber Risk Partner), we sat down for what turned out to be a long evening, which ended just shy of the time our carriages turned to pumpkins. In addition to the food, we were there to chew over and digest two over-arching topics in the security industry, and the intersection between them, for large organisations and start-ups.

  1. Users: How can users be empowered from the perspective of large organisations and start-ups?
  2. Innovation: How can large organisations take advantage of the innovation and pace of start-ups; how can start-ups engage effectively with large clients?

Starting from these two topics, the evening’s discussion centred around six themes:

  1. The best way to engage users through technology.
  2. How to objectively assess the array of technologies which do the same or similar things.
  3. To be more successful, start-ups could use their agility to better understand clients’ businesses and their drivers.
  4. Tools from start-ups should work seamlessly with a client’s existing infrastructure investments.
  5. How to keep abreast of security frameworks and standards and how these can be implemented with technology.
  6. AOB.

Theme 1: All three start-ups who attended the evening stated that their products aim to assist – with machine learning algorithms – rather than inhibit user activities. Their goal is to protect users from doing potentially harmful things; as one start-up put it, their aim is to teach users how to ‘lock their doors’.

Theme 2: There is a wide selection of products that perform the same functions, and four of the security leaders at the table emphasised the importance of identifying the right products objectively. In an environment awash with products, even leaders in the field are sometimes left scratching their heads.

Theme 3: Five of the security leaders provided the following advice to start-ups: to be successful, start-ups need to understand their clients’ business and drivers, as well as show a willingness to be part of a bigger mission. One of the security leaders, who has just formed a new partnership with a start-up, described the security environment as a puzzle with holes, which are used by adversaries. If start-ups can fill these holes and grow, they will get buy-in from enterprises.

Theme 4: Two security leaders, from two different industries, emphasised the importance of product compatibility with the existing infrastructure investments in large organisations. In their view, this is far more important to them than a start-up’s ability to scale.

Theme 5: Two security leaders from highly regulated industries raised the issue of governance, risk management and compliance (GRC) and how to keep abreast of an increasing array of frameworks and standards. This is an issue for large organisations, specifically how to find technologies that assist with implementation and ensure compliance across local and international jurisdictions.

Theme 6: The best accolades for start-ups come from their clients and investors; in fact, the clients are the best investors and revenue is key. Start-ups need to clearly and succinctly describe their products and how they solve pertinent issues for large organisations and then deliver. The most successful companies have products that quickly deliver quantifiable benefits. Additionally, for buy-in, start-ups also need to be able to present the vision for their companies to a managing board.

Despite the arctic conditions outside, the temperature at the roundtable was decidedly warmer, fuelled by food and wine, as everyone came together for a lively debate. Participants were pleasantly surprised that everyone around the table felt at ease and willing to share experiences so freely. This was made possible by the openness and wisdom attendees kindly shared during the evening, and we hope everyone had a jolly good time as well. Whilst the turnout was good, with leaders from a range of industries represented, the weather was not kind and the resulting transport disruptions meant that several leaders were unable to attend. However, all of those who could not make the dinner at the Arctic Circle expressed interest in future events, so perhaps part two, a post-thaw dinner?

To our participants at the event, we are extremely grateful you took time out of your diaries to attend, especially given the weather conditions on the day.

 

Previous Articles:

Finance in the cloud – the shipping forecast

Dr Wendy Ng, CISSP, CCNP; 26th February 2018

With a great many businesses now adopting a ‘cloud first’ IT strategy, worldwide spending on cloud is expected to more than double in the next 3 years, reaching $7.2 billion by 2021. In the financial services industry alone, IDC estimates spending has reached $3.2 billion in 2017 and will reach $4.2 billion by 2018, an increase of over 23%.

Clearly, there has been consistent and sustained growth in demand for outsourced infrastructure, platforms and services. This is akin to the mainframe-based, client-server service model of a few decades ago, which grew consistently only to eventually fall out of favour, partly due to the availability of ever-more-powerful end-user workstations, but potentially also as a result of a cultural drift which emphasised data ownership, promoting local, often on-premise, processing. In the “cloud” model, the actual provisioning of processing resources could be continents away (although the impact of network latency, in particular, is reduced when the locations of utilisation and provisioning are closer together). The fact that IT resources do not need to be provisioned locally allows greater flexibility in the physical location of organisations, especially those with significant computer processing requirements. This distributed computing model is further enabled by technological improvements such as the ever-increasing speed, reliability and geographic spread of Internet connectivity.

There are obvious advantages to a ‘cloud first’ IT strategy, perhaps primarily the shorter timescales to fulfil resource requirements without potentially long and protracted procurement cycles. Once a vendor has been identified and an acceptable Service Agreement ironed out, the speed at which resource-dependent services can be deployed is almost always faster than sourcing and building on-premise infrastructure. This is important in the finance sector, where each organisation is looking to react, rapidly and efficiently, to competitors’ new service offerings, and ideally out-compete them through innovation leveraging the largest cloud providers’ latest technology offerings. It is a form of the “Red Queen’s race” from Lewis Carroll’s “Through the Looking-Glass”: “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” Indeed, Forrester had previously reported that Software-as-a-Service (SaaS) was the main driver of cloud adoptions aiming to speed up digital transformation within the banking sector.

The innate elasticity of the cloud allows organisations to respond rapidly to changing business needs – new requirements can be fulfilled in minutes. By the same token, services or resources can be ‘spun down’ almost instantaneously when superfluous to current needs. Not having to manage a large physical infrastructure also allows organisations to be more nimble and adaptable to change. A possible analogy is an oil giant scrapping its supertankers and replacing them with an outsourced service of a continuous stream of much smaller vessels (maybe speedboats) that are much cheaper to produce individually, with each being used to transport oil “on demand”. A vessel that is not used by one oil merchant can be used in short order by another, or not at all; the service provider carries the risk of infrastructure not being used (and prices the service accordingly), while the oil merchant can react more quickly to changes in demand. In an era of unrelenting change, the ability to sync up with the prevailing trend, be it competition-, consumer- or regulatory-driven, is crucial to an organisation’s long-term survival, particularly in the fast-moving environment of financial services. As we get more speedboats, it looks like the Thames Estuary just became far more accommodating!
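To make the elasticity point concrete, the decision at the heart of ‘spinning up’ and ‘spinning down’ is often no more than a rule evaluated against current demand. The sketch below is illustrative only: the thresholds, fleet sizes and scaling rule are invented, and it is deliberately not tied to any real cloud provider’s API.

    # Illustrative sketch of an elastic-scaling rule; thresholds and behaviour
    # are hypothetical, not a real provider API.
    def desired_instances(current: int, cpu_utilisation: float,
                          low: float = 0.30, high: float = 0.75,
                          minimum: int = 1, maximum: int = 20) -> int:
        """Return how many instances we want, given average CPU utilisation."""
        if cpu_utilisation > high:          # demand spike: add capacity
            return min(current * 2, maximum)
        if cpu_utilisation < low:           # superfluous capacity: spin down
            return max(current // 2, minimum)
        return current                      # steady state: no change

    # Example: a quiet overnight period followed by a morning peak.
    fleet = 4
    for utilisation in [0.20, 0.15, 0.80, 0.90, 0.50]:
        fleet = desired_instances(fleet, utilisation)
        print(f"utilisation={utilisation:.0%} -> run {fleet} instance(s)")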

 

Previous Articles:

 

 

GDPR – Getting your Ducks on Parade for Regulation

Dr Wendy Ng, CISSP, CCNP; 26th January 2018

 

 

For any organisation, it is a numbers game; GDPR, the most important new regulation in decades, will certainly give them a run for their money! For service providers, the regulation could have a financial and reputational impact on their clients, with associated legal ramifications. The oft-quoted potential fines of four per cent of global turnover or €20,000,000, whichever is greater, grab the headlines (and the attention of the Board), but the fines are only part of the possible additional costs brought about by the regulation. To be compliant, many organisations will require structural changes across the entire data lifecycle to provide the level of guardianship needed to ensure that data are handled in compliance with the Regulation. For organisations which routinely handle large quantities of sensitive personal data, the appointment of a Data Protection Officer (DPO) is compulsory. Arguably, this would encompass most modern businesses and organisations, but it is particularly relevant for the finance and insurance industries, law firms and healthcare providers.

GDPR is concerned with the handling, processing and storage of Personal Data of EU residents. The regulation is applicable to all organisations that offer products and services to EU residents, regardless of an organisation’s legal or physical location. As the name implies, Personal Data is information about an individual which identifies them, could allow them to be impersonated, or which describes them in the widest sense; this could include data about status, preferences, behaviour, location or movements, and also includes biometric data of all forms, and even opinions about the individual from other people. Ultimately, the objective of GDPR is to enforce the protection of every individual’s right to privacy of their data. The regulation does not apply to anonymised data, for example patient responses to new drugs in clinical trials, or to meta-data, i.e. intelligence gained from the original data. Clearly, not all of an organisation’s data will be affected by GDPR, and those data which are impacted by the regulation are likely to carry higher lifecycle-management costs; thus the first step for any organisation is to identify and classify its data. Accurate classification of data, and continued maintenance to keep that inventory up to date, will not only provide greater visibility to the business but will also reduce the overall overhead of data management with regard to GDPR.
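That classification step can start very simply. The sketch below is illustrative only, with invented field names and a deliberately naive rule set; real classification would draw on a fuller data dictionary and discovery tooling.

    # Illustrative only: flag fields that are likely to be GDPR-relevant
    # personal data. Field names and the rule set are invented for the example.
    PERSONAL_DATA_FIELDS = {
        "name", "email", "address", "phone", "date_of_birth",
        "location", "biometric_id", "health_record",
    }

    def classify_record(record: dict) -> dict:
        """Split a record into personal data and other data."""
        personal = {k: v for k, v in record.items() if k in PERSONAL_DATA_FIELDS}
        other = {k: v for k, v in record.items() if k not in PERSONAL_DATA_FIELDS}
        return {"contains_personal_data": bool(personal),
                "personal": personal, "other": other}

    record = {"name": "A. Smith", "email": "a.smith@example.com",
              "order_id": "X-1001", "basket_value": 42.50}
    print(classify_record(record))

An inventory built this way also gives the business a defensible answer to the question “where is our personal data?”, which underpins everything else the Regulation asks for.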

A key goal of GDPR is to ensure that organisations are incentivised to establish: effective accountability, including systems and processes that are designed with privacy in mind; a programme of Privacy Impact Assessments where risky processing, or processing of large quantities of personal data, is planned; explicit consent from the data subject before their information is used; processes for complying with data subject requests for transfer, rectification or deletion of their personal data; breach-management procedures, including prompt notification to supervisory authorities (within 72 hours of discovery) and, if the breach is deemed to present a high risk, to the data subjects as well; and, finally, the regulation of data processor organisations.

If we get past the actual numbers and the fact that the regulation is new, the actions required are simply ‘good practice’ in terms of data management. However, two 2017 studies, by Gartner and Veritas, suggest that most organisations are not quite there yet. Although data is extremely important to organisations, described as the 21st-century gold, many businesses still lack a structured data strategy, which becomes ever more pertinent as data volumes continue to increase, making it simply impossible to maintain everything to the same standard. In this environment, could the regimented prescription of GDPR be just the medicine required for organisations to align their ducks and gain visibility of their data, with the added possibility of gaining even greater intelligence and insight from them?

 

Previous Articles:

Snow White and the seven cryptocurrency miners

 

 

Dr Wendy Ng, CISSP, CCNP; 18th December 2017

 

Hi Ho-oo-oo! Hi Ho-oo-oo! Hi ho, hi ho, it’s off to work in the Data Centre we go! Deep in the magic forest there sits a large, anonymous-looking grey shed; inside, seven Data Wizards And Regional Facility Staff (D.W.A.R.F.S.) are hard at work loading new servers (they call them “compute nuggets”) into equipment racks and tending the cooling and fire suppression systems, and uninterruptible power supplies, that are essential safety and backup systems in any data centre (or mine). These seven D.W.A.R.F.S. used to be expert geologists, hacking deep into the rocks in cramped tunnels looking for lustrous gold deposits, but have now changed careers towards information technology in search of their hoped-for riches; they are now mining and trading cryptocurrencies such as Bitcoin and Ethereum, hallmarking new blocks on the blockchain (and gaining new crypto coins) as they go.

Cryptocurrencies are indeed big business, especially within the context of their rocketing, albeit volatile, value. Indeed, Digiconomist estimates that the mining of Bitcoin alone uses more energy, at 32.36 TWh, than is consumed by Ireland – or about the same as Serbia. Our D.W.A.R.F.S. have been extremely industrious. Their haul is often traded on exchanges, which have inevitably become prime targets for cybercriminals (let’s call them the “Wicked Queens”) who, through various heists, have stolen an estimated $1bn (Cryptovest) to $15bn (Fast Company) of cryptocurrencies.

The most recent incidents involved the theft of 4,700 bitcoins (worth around $80 million at current prices) from the cryptocurrency mining marketplace NiceHash, and a freeze of Ethereum wallets worth $170 million as a result of a “security test gone wrong”. Technologically, a cryptocurrency system consists of a large (and growing) distributed database or ledger recording transactions (often anonymously) amongst cryptocoin users (including Wicked Queens), with blocks of transactions being verified cryptographically by miners (our D.W.A.R.F.S. and others), who gain additional cryptocoins for their efforts. There is no specific guidance on the security controls that our D.W.A.R.F.S. should implement to protect the results of their hard work in their cryptocurrency system and exchange; however, our guardian, Snow White, can help. The first step to protection is to apply the principles outlined in recognised best-practice security frameworks already widely used across more mature enterprises, including the ISO 27000 and NIST 800 series. Software, of course, is fundamental to cryptocurrency systems and exchanges, and software flaws in these have been used in some recent attacks; a software flaw may have been the ultimate cause of the Ethereum wallet lockout, and the NiceHash breach may have been possible due to flaws in that marketplace’s website.
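For readers wondering what the D.W.A.R.F.S. are actually doing when they “hallmark” a block, the sketch below is a deliberately simplified, illustrative proof-of-work miner: it searches for a nonce whose hash of the block contents begins with a required number of zeros. Real cryptocurrencies use far harder difficulty targets and richer block structures, but the core idea is the same.

    # Simplified proof-of-work illustration (not a real cryptocurrency protocol).
    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
        """Search for a nonce so that SHA-256(block_data + nonce) starts
        with `difficulty` hex zeros."""
        prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce, digest     # block "hallmarked", miner rewarded
            nonce += 1

    nonce, digest = mine("alice pays bob 1 coin; bob pays carol 0.5 coin")
    print(f"nonce={nonce}, hash={digest}")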

As in most organisations, software developers in cryptocurrency marketplaces are likely to be under constant pressure to release the latest features; too often, software is developed for functionality, with security as an afterthought, so sadly it is not surprising that flawed software is a primary route of attack. But leaving security as an afterthought, perhaps to be dealt with after security tests during the QA phase, can delay release schedules, as retrofitting security can be more time-consuming than building it in in the first place, and may even be less effective (and less secure); the end result is an easier nut for attackers to crack. So, secure application design and secure coding practices should be incorporated into the software development lifecycle from the outset and, where possible, inputs from threat analysis incorporated into the design phase (perhaps as “abuser stories” in an agile SDLC), along with “white box” penetration testing, peer code review and, ideally, automated static code analysis. After release, continued proactive defence strategies, including vulnerability monitoring, red teaming exercises as well as user education – after all, 19,000 bitcoins were stolen from the exchange Bitstamp in 2015 through a simple phishing attack – will reduce the attack surfaces at the Wicked Queens’ disposal. As the cryptocurrency mining and trading efforts of our D.W.A.R.F.S. continue to expand, they will become more attractive targets for those Wicked Queens; Snow White will be extremely busy indeed!

 

Previous Articles:

Lost in the woods – tales of wandering data

 

Dr Wendy Ng, CISSP, CCNP; 30th November 2017

 

In an age of hyper-connectivity, it is almost refreshing to see that not all data breaches result from the compromise of networked devices. One of the most recent incidents is the loss of data on 5,000 members of an exclusive club of Oxbridge graduates, including names, addresses, phone numbers and bank details. The information, which it is believed was not encrypted, was stored on a ‘back-up’ hard drive that has gone missing from a locked room within the club’s London HQ. This is the latest in a series of data losses involving physical storage media that has “gone missing”, including data on 46,000 Zurich clients lost from tape storage in South Africa and data on 25 million UK child benefit recipients (which included the names and birth dates of the recipients’ children), stored on CD-ROM discs that were lost in transit from HMRC to the National Audit Office. In the Zurich incident, the company was fined $3.5 million for failings in data security by the FSA (now the FCA).

These incidents show that traditional physical data storage practices still have the potential to lead to legal liability and damage to an organisation’s reputation. Although for many of us the latest technology has inherent appeal, most data breaches result from a lack of basic cyber ‘hygiene’ and secure data lifecycle processes, including where and how data are stored. Any physical media containing sensitive data should be stored securely and handled with pre-defined processes, including restricting physical access to the media on a needs-only basis. As for the data themselves, sensitive data at rest should always be encrypted, with the decryption keys stored securely. An up-to-date inventory of in-use, backed-up and archived data, including their location and who has access to them, should always be maintained.
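Encryption at rest need not be exotic. The snippet below is a minimal sketch using the widely used Python ‘cryptography’ package; the record contents and file name are invented for illustration, and in practice the key would be held in a hardware security module or key-management service rather than anywhere near the encrypted data.

    # Minimal illustration of encrypting sensitive data at rest with Fernet
    # (authenticated symmetric encryption from the 'cryptography' package).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice: store in an HSM/KMS,
    fernet = Fernet(key)                 # never next to the encrypted data

    member_record = b"name=J. Bloggs; account=12-34-56 00112233"
    token = fernet.encrypt(member_record)        # what goes on the backup drive

    with open("members.backup", "wb") as f:
        f.write(token)

    # Recovery requires the key; a lost drive on its own is unreadable.
    print(fernet.decrypt(token))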

Whilst, as standalone activities, these actions do not appear particularly onerous, they must be considered in the context of a modern corporation: with frequent regulatory and organisational changes (including mergers, acquisitions and divestments), as well as staff churn, it is no wonder organisations sometimes lose track of data under their ownership or control. To succeed, dedicated information security resources must be able to integrate these processes as seamlessly as possible into the organisation’s standard operating procedures (SOPs). Without this, experience has shown that staff start to lose sight of the wood for the trees, and creative humans find ‘shortcuts’ which can at least partially contribute to tales of wandering data. With GDPR around the corner, each such tale of lost data is guaranteed to be far more expensive than it has been to date.

 

Previous Articles:

Towards a secure mindset

 

Dr Wendy Ng, CISSP, CCNP; 11th October 2017

 

The gatherings of the hacker community at Black Hat and Def Con have provided opportunities to survey professionals with their eyes and ears closest to the security threat coalface. Recent attacks and breaches have seen a shift from targeting information systems infrastructure to targeting applications and individuals within organisations, both of which have legitimate access rights to corporate intellectual property and to sensitive client and consumer data. Indeed, surveys by both Blackhat.com and Thycotic found that social engineering attacks targeted at individuals within organisations are seen as the biggest threat to the security of corporate information stores. For applications, there are two key mitigation strategies: direct protection (ensuring the application is not vulnerable to known attacks through secure design, timely patching of identified security vulnerabilities, upgrades to more secure versions, and secure configuration) and indirect protection (using tools such as host/network integrity monitoring and application-aware firewalls); both can be used to protect critical applications (and their data) against targeted attacks.

Notwithstanding possible ethical issues, the human equivalents of the direct protection strategies of patching and upgrading are difficult to implement, at least at scale! This leaves us with the final option: secure configuration. Can we apply a secure configuration to humans? Most organisations implement the first steps towards this, starting with raising security awareness through training sessions; maybe these are followed up with evaluation tests and subsequent regular reminders, through logon screens, screen savers and posters. These elements at least give employees and other associates of an organisation some foundation in their security responsibilities – security 101 for employees, if you like.

But, whilst cybersecurity has become a business requirement, and staff security awareness training and reminders are now considered a mandatory part of a good cybersecurity posture, experience has shown that simply presenting employees with periodic eLearning materials is not enough. To effectively mitigate the single biggest cybersecurity weakness in many of our organisations, we need to make that configuration change – we need to help our users develop a secure (maybe even “security first”) mindset.

As an analogy, most of us lock the external doors of our homes, and we would not invite total strangers in and give them the freedom to roam. Yet many of us will merrily go about our day doing the technology equivalent of these things with our corporate and personal assets. However, until relatively recently in historical terms, people did not routinely lock their front doors either. What ensued was a mindset change in response to perceived and real changes in local and regional threats. Ever since we were children, we have been told to lock our doors when we leave our homes. This advice is initially shared by our immediate families and primary carers and subsequently reinforced by the wider social community. In time, securing our homes when we go out becomes second nature. This approach of initial guidance followed by regular reinforcement changed our behaviour on how we secure our homes. Although most of us are no longer children by the time we join a corporation, the same guided education strategy, with frequent gentle reinforcement, can and should change mindsets and provide enhanced protection against what is currently the biggest single threat to an organisation: its users. As Abby Christopher stated eloquently, we need to implement the ‘human firewall’.

Previous Articles:

Information Security Investments for 2017

 

Dr Wendy Ng, CISSP, CCNP; 21st August 2017

 

The latest Gartner report has estimated 2017 spend on cybersecurity to be $86 billion, with a trend towards an Opex cost model focused on outsourcing, consulting and implementation services. The industry trend away from the Capex business model, with on-site, organisation-owned hardware, appears to have spread even to security services.

Due to growth and reorganisation, many organisations have incumbent legacy IT systems, which are difficult and expensive to protect. There appears to be widespread, concerted investment in IT transformation, which presents a major opportunity to re-align company networks and implement secure architecture from the design stage; this should place defenders on a more even keel, at least for the infrastructure.

The report has also shown attackers moving away from launching attacks on devices and towards targeting individuals and software. Indeed, the latter were precisely the targets of the WannaCry and NotPetya ransomware. Both attacks pose questions for traditional defence strategies, which rely heavily on known signatures of illegitimate processes. To guard against these, cyber-defenders will require technologies that detect irregularities based on heuristic and self-educating, “machine-learning” type algorithms, and recent cyber-attacks are likely to spur spending on these technologies.

Another driver for cybersecurity spend over the next year will be the forthcoming EU GDPR. For any organisation with a presence in the European Union, or working with data from its citizens, uncontrolled loss of personal data (including profiling) will be subject to fines of up to 20 million euros or 4% of annual global turnover, depending on the nature and scale of the transgression. With organisations operating to a “better safe than sorry” mentality, potential financial penalties of this scale will drive spend, specifically on Data Loss Prevention solutions and information classification services. However, these directed investments should reap rewards, plucking the low-hanging fruit and increasing the resource requirements for attackers. 2017 could be the year that defenders fight back.

 

Previous Articles:

Defenders and attackers – economic asymmetry in cybersecurity

 

Dr Wendy Ng, CISSP, CCNP; 31st July 2017

 

Even before they start, the odds are stacked against the defenders. Modern infrastructures must sustain a large number of internal and external connections to support business objectives such as partner interconnectivity, mobile workforce efficiency and productivity, and interaction with the client base. But this increased connectivity significantly broadens the organisation’s attack surface. Furthermore, many of these connections are from the “Internet of Things”: typically light-weight, feature-rich devices which are, as we have seen in recent news stories, often not designed with security in mind, and thus easy targets, in many cases being recruited into botnets. One constant in the defender/attacker dynamic is that the interaction always costs the defender significantly more. Netswitch estimates that launching attacks cost $1.2 billion in 2015, whilst the cost of defence plus costs associated with breaches was £395 billion. In this defender/attacker arms race, the defenders’ costs are almost four-fold higher than the attackers’. As long as this cost asymmetry exists between attackers and defenders, the impetus for further cyber attacks will continue.

Despite a large number of high-profile ransomware attacks, the 2017 global review by IBM and Ponemon shows that the costs of data breach events reduced by 10% on average over the last year. This is accompanied by a significant reduction in the costs associated with lost or damaged records containing confidential information, from $151 to $141 per record. The cyber-defence community is thus successfully reducing its cost base. Actions shown to contribute to reducing the costs of cyber breaches include encryption of records, threat information sharing, use of security analytics, well-designed incident response, staff training, and well-designed business continuity management.

The effects of the recent WannaCry and NotPetya attacks have demonstrated that a proper patch management policy can significantly reduce the organisational attack surface. Technology operations teams have their work cut out, though; company growth, mergers and acquisitions, spin-offs and re-organisations mean that technology teams may not have complete or up-to-date information on their corporate estate, which could include legacy systems that are expensive and often difficult to protect. Information technology estates are, however, undergoing a major transformation, with new technology developments and migrations of services to virtualised computing environments, whether on-premise or in the cloud. These transformations will re-align technology operations estates and enable more timely patch/update testing and roll-out cycles, providing prompter protection against the likes of Petya and its ilk. “Zero-day”, or unpublished, vulnerability information, whether in the hands of cybercriminals or nation-state actors, remains the critical area of weakness for every organisation – a known unknown. Thus a timely response to the publication of new security threat and vulnerability information, mitigation guidance and (when released) software updates should be considered a key business objective (not just a technology objective) of all organisations.

Given the effectiveness of offence for attackers, there is increasing debate on its inclusion in the defender’s arsenal. Deploying an attacker tactic in defence could reduce the cost asymmetry between attackers and defenders. Additionally, the threat of cyber or kinetic retaliation (the latter a key tenet of military defence strategies and campaigns) could provide an effective deterrent to cyber-attack. However, care must be taken with cyber retaliation; such action may only take out an intermediate staging point, such as compromised servers in a country unconnected with either attacker or defender. Indeed, the effect of such “misinformation” is exactly the same for cyber retaliation as for kinetic retaliation – imagine the collateral damage caused by a drone-based guided missile attack against the location of a physical aggressor who is found to be using “human shields” to increase the cost of any physical retaliation against their actions. Furthermore, an aggressive reaction against cyber attackers could lead to an escalation in attack intensity – perhaps to a major DDoS attack from what was previously “only” a network penetration and data extrusion attack. On the other hand, given the scale of the issue, modifications to our current defence strategy may be necessary to address, and re-balance, the micro and macroeconomic asymmetry in the attacker-defender interaction.

 

Previous Articles:


How many clouds do you need?

Dr Wendy Ng, CISSP, CCNP; 15th June 2017

 

Traditionally, enterprise-owned data centres were the predominant provider of an organisation’s services. These can be considered to be “private clouds”, the hardware and services and their security being under the organisation’s control. On the other side of the spectrum, resources and services can be provided via a third party provider – as a ‘public cloud’ – which are accessed by multiple organisations. Without the need to source and implement new physical systems, or (in many cases) application platforms, organisations can gain significant time savings in the system development lifecycle by deploying to public clouds.

Many organisations have been using ‘hybrid clouds’ – a combination of private and public cloud – for their service and infrastructure requirements, balancing the risk and cost profiles of the two solutions. The main advantage of the public cloud is its inherent flexibility and elasticity. For many small and medium-sized enterprises (SMEs), where the return on investment (ROI) for infrastructure is particularly important, a “public cloud first” policy has been widely adopted. Similarly, using public cloud services is particularly attractive for enterprises with large fluctuations in resource demand, a prime example being online retailers. However, modern enterprises are dynamic environments, and the drive for innovation and flexibility has meant that even large and highly regulated industries now run public cloud-based platforms. Indeed, public cloud adoption has been growing rapidly, at a rate estimated by Gartner at 18% for 2017. Between 2015 and 2020, IDC forecasts, cloud spending will grow seven times faster than overall IT spend.

Organisations migrating services to public clouds typically start with the least business-critical services, to ‘test the waters’ and to limit business and, in particular, compliance risk in the early phases of adoption. With increased acceptance and continued innovation, and with the cloud’s core advantages of flexibility and elasticity, more services are likely to be sourced from the cloud, including those with potentially bigger business impact – and risk. However, sourcing services from the cloud is likely to have other operational impacts, including on business continuity planning (BCP) and disaster recovery (DR) strategies. This is especially true for cloud resources which support other services, a prime example being the infrastructure itself. Unlike business continuity plans for IT services owned and run by the organisation, there is typically a more limited choice of physical locations, hardware and platforms supporting the underlying cloud-based services. Whilst cloud service providers will have their own BCP, for critical, high-business-impact services under a public-cloud-first strategy, implementing BCP with a second cloud service provider can further mitigate risk in the event of an outage. In addition to potential increases in cost, however, using a diversity of cloud providers as part of the BCP will have implications for software architects: the applications involved must offer the same functionality on different infrastructure platforms. For cloud-first strategies, how many clouds do you need? The answer often depends on the risk and impact of a loss of service.

 

Previous Articles:

IAM in Healthcare

 

Dr Wendy Ng, CISSP, CCNP; 22nd May 2017

 

It was perhaps no coincidence that amongst the first victims of the WannaCry ransomware were NHS hospitals in the UK. In addition to the presence of specialist equipment whose applications are not always compatible with the latest operating system patches, healthcare networks are designed to provide availability. As such, even sensitive data are often unencrypted, even at rest. Whilst this aids resource access by clinicians, particularly at the point of treatment, the design has been shown to be woefully inadequate in a hyper-connected world.

A security benchmark by the Ponemon Institute showed that almost 90% of surveyed healthcare organisations experienced an information security breach. More worryingly, 45% had five or more such events. This suggests attacks often appear in successive phases. With continued digitisation, the pressure on healthcare environments will not abate.

Successful breaches will invariably affect any organisation’s reputation. Being a highly-regulated sector, confirmed data breaches could subject healthcare organisations to other legal and regulatory liabilities, including financial penalties. There is no silver bullet; a good security posture will involve data classification, layered controls, cultural change and mature breach compensating strategies. Complete protection against breaches is not a realistic expectation. However, it is possible to include additional controls for access to resources and reduce the time between a breach and its detection, thus limiting the damage.

To aid event detection, logging and network monitoring toolkits such as Security Information and Event Management (SIEM) systems have increased visibility of event information within the organisation’s infrastructure. In the most recent attack, once inside the network, perpetrators were able to access resources and data rapidly. A system which is able to detect and respond to unauthorised resource access could therefore limit the damage. Identity and Access Management (IAM) is precisely such a tool. Already successfully applied in other highly regulated industries, its core strengths of centralised information analysis and automated corporate policy enforcement could have limited the effects of the most recent cyber-attacks. In environments which experience a large number of successful cyber-attacks and repeated breaches, IAM could be an important defensive tool in the armoury.
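As a toy illustration of the kind of correlation such tooling performs, the sketch below counts denied resource-access events per identity and raises an alert once a threshold is crossed; the log entries, field names and threshold are all invented for the example.

    # Illustrative SIEM-style correlation: alert when one identity accumulates
    # repeated access-denied events. Log format and threshold are invented.
    from collections import Counter

    events = [
        {"user": "dr_jones", "resource": "records/123", "allowed": True},
        {"user": "svc_backup", "resource": "records/007", "allowed": False},
        {"user": "svc_backup", "resource": "records/008", "allowed": False},
        {"user": "svc_backup", "resource": "records/009", "allowed": False},
    ]

    THRESHOLD = 3
    denied = Counter(e["user"] for e in events if not e["allowed"])

    for user, count in denied.items():
        if count >= THRESHOLD:
            print(f"ALERT: {user} denied access {count} times; possible breach in progress")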

 

Previous Articles:

Big data – our saviour or just trying to find a needle in the haystack?

 

Dr Wendy Ng, CISSP, CCNP; 4th April 2017

 

Numerous publications have described the use of big data analytics, but there is still misunderstanding about what it is, where it came from and why we need it.

The concept is not new. Statisticians and research scientists have been using the techniques for at least 150 years. Indeed, Florence Nightingale pioneered systematic data collection during the Crimean War to distil the effects of unsanitary conditions in military hospitals.

Fundamentally, the goal of big data is to filter insight, or ‘signals’, from the apparently-noisy background of the data itself, which can then be used to characterise an item of interest: the diagnostic features of a disease, for example, or how to detect a hacker in your network. Closer to home, one of the best examples of the successful application of big data analytics is in the retail space, specifically an individual’s shopping habits, as demonstrated by Amazon.

Am I making a noise?

If we had all possible data for a given problem, we would not need statistics; we could simply look at the data. In reality, this rarely happens – typically only a subset of the data will be available. The question then becomes how representative this subset is of the overall population, and therefore how far the information can be trusted. This is where big data analytics can help. The main issue is how to separate the portion of the data that provides insight and intelligence from the ‘noise’. Depending on the data type, statistical noise may be artefacts of the collection process, or simply variation in the sample that is magnified because the sample size is small. In the shopping example, how an individual shops on a single day in the festive season is unlikely to represent their typical shopping pattern outside of that season. This brings us to the value of big data.

The power of big data is its size. When the sample sizes increase, the data become less likely to be skewed disproportionately by spurious artefacts. True signals will emerge from the data since these will be consistently observed. Bigger samples can also detect rare events, adding details to the dataset. Big data analytics can be extremely powerful. However, as Florence Nightingale showed, systematic collection and analysis will make it easier to coax that needle out of the haystack.
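That point can be demonstrated in a few lines. In the synthetic example below, the ‘signal’ is a true mean of 10 buried in noise; as the sample grows, the estimate settles ever closer to the true value because the noise averages out (the expected scatter shrinks roughly as one over the square root of the sample size).

    # Synthetic demonstration: bigger samples give steadier estimates of the
    # true signal, because noise averages out (standard error ~ 1/sqrt(n)).
    import numpy as np

    rng = np.random.default_rng(0)
    true_mean, noise_sd = 10.0, 5.0

    for n in [10, 100, 1_000, 10_000, 100_000]:
        sample = rng.normal(true_mean, noise_sd, size=n)
        estimate = sample.mean()
        std_error = noise_sd / np.sqrt(n)
        print(f"n={n:>6}: estimated mean = {estimate:6.3f} (expected scatter ±{std_error:.3f})")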

 

Previous Articles:

 


Who is trying to watch you? IAM

 

Dr Wendy Ng, CISSP, CCNP; 22nd March 2017

 

Who are you? Identity & Access Management.

Traditionally, IAM is used to authenticate system users before granting them access to systems and resources located in the internal network domain – that is, inside the enterprise network perimeter. The user can be located inside or outside that network perimeter, and IAM solutions can be customised to enforce stronger identity verification (that is, authentication) when the user is located outside it. This demarcation of external and internal domains was also supported by the use of physical network segregation.

However, this boundary is increasingly blurred, driven by the Internet of Things revolution, virtualisation, mobility requirements and cloud adoption. Data and services now transit between service providers and between organisations. There have been fundamental changes in end-user expectations of system availability and connectivity, and system access via multiple endpoints, including personal devices not controlled by the service provider, is expected.

Information Technology must support and enhance productivity. Good architecture design and accurate data classification are key enablers, but identity and access management will come to the forefront in distributed networks, where the only realistic point of control is the point of data and resource access. As the technology environment changes, IAM controls must also adapt. Effective IAM solutions will need to gather additional contextual and behavioural data, such as the time, location and duration of access, to provide seamless system access whilst safeguarding users and services.
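Conceptually, such a context-aware decision is a policy function evaluated over the request and its context. The sketch below is illustrative only; the attribute names, trusted networks and rules are invented to show the shape of the logic rather than any particular IAM product.

    # Illustrative context-aware access decision; attributes and rules are
    # invented to show the shape of the logic, not any specific IAM product.
    from datetime import datetime

    def access_decision(user_role: str, resource_sensitivity: str,
                        request_time: datetime, location: str,
                        mfa_passed: bool) -> str:
        on_trusted_network = location in {"hospital_lan", "corporate_vpn"}
        working_hours = 7 <= request_time.hour < 20

        if resource_sensitivity == "high" and not mfa_passed:
            return "deny"                      # sensitive data always needs MFA
        if not on_trusted_network and not working_hours:
            return "step_up"                   # ask for additional verification
        if user_role in {"clinician", "admin"}:
            return "allow"
        return "deny"

    print(access_decision("clinician", "high",
                          datetime(2017, 3, 22, 23, 15), "home_broadband", True))

The “step_up” outcome is the interesting one: rather than simply blocking users, the control asks for stronger proof of identity only when the context looks unusual.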

The business expects and demands hyper-connectivity and ease of collaboration in modern networks. Together with the rapid adoption of virtualised cloud environments, this removes one of the most potent tools in the defender’s armoury: physical segregation. Without it, the threat landscape becomes highly dynamic; combined with attackers’ propensity to adopt technology innovations, this magnifies the attack surface and increases business risk. Innovation is critical in business – the impacts on businesses which fail to innovate are well documented. However, any change is associated with risk, and the aim is to manage this risk without irreversible damage to the business itself. Increasingly, business risks have foundations in the technology estate, specifically the organisation’s network infrastructure. Given that connectivity and innovation are essential to businesses, IAM could be a crucial enabler in the defender’s arsenal. Unfortunately, it also means the trend towards being watched more closely is unlikely to abate.

 

 

Previous Articles:

IOT3 – Internet of Things, Threats and Terrestrials

 

Dr Wendy Ng, CISSP, CCNP; 4th December 2016

 

The Internet of Things (IoT1)

From smartphones to vehicles to medical devices to networked light bulbs and smart meters, connected devices have proved to be the instrument of change towards improved productivity and user experience over the past decade. The so-called Internet of Things could also be the single most disruptive force in people’s lives since the introduction of machinery during the industrial revolution. Unlike the frame-breakers of the industrial revolution, however, people have embraced the Internet of Things; this is particularly true amongst millennials, for whom a connected digital experience anywhere, anytime is an expectation and directly influences their engagement with brands, products and services.

The Internet of Things, whilst undoubtedly providing tremendous opportunities, also presents new challenges, for both the consumers and the providers of products and services. The number of Internet-connected devices is expected to reach over 20 billion by 2020, with a 30% YoY increase between 2015 and 2016 according to Gartner. However, every connected ‘Thing’ could be a threat and present regulatory liabilities, cause reputational damage and affect the bottom line.

Internet of Threats (IoT2)

As the number of connected devices increases, so do the attack vectors, and continued pressure from service users means this will only increase. Clearly, simply securing the perimeter, i.e. preventing device access to a network, is no longer a viable option. This is further complicated by the rapidly moving technology landscape, for example the adoption of cloud-based services. Modern networks will require more granular network controls, including segmentation as well as classification of services and consumers. We are also observing greater reliance on informatics and real-time analytics of network traffic from all types of connected devices, to aid the identification of unusual patterns and potential compromises, especially in highly regulated and sensitive network environments such as the banking sector.

However, no network can be completely secure – a network breach must be expected to occur at some point. The real question is whether, when that breach occurs, it can be detected, closed down and any damage remediated promptly, to limit the organisation’s exposure to information loss, regulatory non-compliance and reputational impact. FireEye has shown that this ‘dwell time’, or the time between compromise and detection, averages 146 days globally and 469 days in EMEA. The actual dwell time depends on the sophistication of the attack. Sophisticated network compromises may deploy Advanced Persistent Threats (APTs) to aid the progress of the attack, or instead attackers may “live off the land”, a term coined by Dell SecureWorks’ Counter Threat Unit. Both methods, however, require active involvement by the attacker, the explicit goal being theft of an organisation’s business data and intellectual property. Connected devices with immature security implementations only widen the threat surface available to an attacker. Yet many of these connected devices help to improve employee productivity and so are perceived as adding value to the organisation; this brings us to IoT3, and the ultimate reason cybersecurity is required.

Internet of Terrestrials (IOT3)

The ultimate audience for Internet-connected devices are Terrestrials; the technology merely assists. In 2014 and 2015, many cyber-security incidents were the result of unauthorised network and system accesses enhanced by APTs. These are highly targeted, often initiated with phishing campaigns harnessing the wealth of publicly available social media information. Such attacks are particularly troublesome to defend against since the users, often insiders, can legitimately access the network, increasingly via smartphones and tablets, and are thus difficult to protect against with traditional perimeter-based protections. Incidentally, in large organisations with sophisticated cyber defences, insiders are the root cause of over half of security breaches, based on research by IBM and PwC. In order to combat this, defenders need to adapt. Increasingly, organisations are adopting an active defence strategy with granular access control and continuous monitoring. This strategy deploys the raw processing power of modern computing and analytics in combination with the unrivalled creativity of the human brain, our final T, the Internet of Terrestrials. The approach has already been adopted by high-maturity cyber security environments at some of the world’s largest institutions, and more organisations are likely to follow in a rapidly evolving threat and regulatory landscape.

IoT3 and Beyond

This blog has discussed how technology in the form of the Internet of Things, IoT1, provides the foundation for disruptive innovation. This has contributed to a highly dynamic threat landscape in the form of the Internet of Threats, IoT2. However, the ultimate actors – and users – are the Internet of Terrestrials, or IoT3. It is also the latter who will ultimately determine how readily companies adopt innovation in a very exciting field.

 

 

Previous Articles:

Cyber Security and Healthcare, Uncomfortable Bedfellows?

 

Dr Wendy Ng, CISSP, CCNP; 30th November 2017

 

The duty of a healthcare professional is to treat the patient in front of them. Increasingly, this is assisted by and dependent on advanced technologies and IT systems. To continue to do so, the healthcare industry will need to adopt a more astute cybersecurity stance.

The first high-profile cyber security incidents were in the fast-moving retail, technology and finance industries. Whilst these often have significant financial and public-relations impacts, they do not directly affect the well-being of individuals. 2015 saw a dramatic shift, with the healthcare industry suffering the greatest number of cyber incidents, based on data from IBM’s security services operation, alongside a trend of increasingly targeted adversaries.

IBM identified that almost 10 million patient records were available for sale on the ‘dark web’. Healthcare information is sought after because its useful lifetime is longer, and it contains more personally identifiable information (PII), than credit card data. This information can subsequently be used for identity, financial and tax fraud, and for extortion attempts. Since an identity is far more difficult to change than a credit card, healthcare data is worth significantly more than credit card information.

Attackers are now directly targeting establishments at the point of care provision. Healthcare providers were subjected to a number of ransomware attacks in 2016, resulting in IT infrastructure and digital patient records being inaccessible until ransoms were paid. Patients were reportedly turned away during the attacks, with direct health impacts and potentially fatal consequences. Unfortunately, due to the value of the data, such attacks are likely to continue. Additionally, McAfee identified that the healthcare environment relies heavily on legacy systems, leaving it open to attack by relatively unsophisticated methods.

Even as uncomfortable bedfellows, cyber and healthcare could be in bed together for a while.