Comment: Combating cyber crime with protective monitoring

Brewer says effective SIEM tools should provide centralised log management, event correlation, trending, analysis, a policy compliance dashboard and a reporting engine
Ross Brewer, LogRhythm

It doesn’t take a security professional to realise that the recent data leak from the US Military relating to activities in Afghanistan would have tremendous repercussions – from the soldiers deployed on the ground to the President of the United States himself.

The breach came only two weeks after a former MI6 computer specialist was convicted of stealing top secret information and putting it up for sale. Daniel Houghton copied more than 7,000 files while working at the headquarters of the Secret Intelligence Service between September 2007 and May 2009. Over this period, he stored the information on DVDs, CDs and memory sticks and hid them under his bed before attempting to become a double agent.

If two of the world’s most secure organisations, with highly sophisticated technology at their disposal, have become victims of a data breach, what hope is there for other commercial and public sector organisations?

Deliberate malicious behaviour from a minority of employees is nothing new, but as these examples show, it only takes one weak link to break the chain.

Speaking in March 2010, Lord West of Spithead, parliamentary under-secretary for security and counter-terrorism, warned that in the past year, Britain suffered 300 significant attacks on government computer systems in attempts – successful or not – to steal data or sabotage systems.

Unlike countries such as the US and Germany, the UK doesn’t yet have a data breach notification law, so there’s no telling what the actual figure for data loss is – suffice to say, it’s likely to be considerably higher than 300, as many private businesses choose to keep such knowledge undisclosed.

But the repercussions of a data breach mean that the issue can’t be ignored.

For central and local government, fire, police, health or education authorities who hold vast quantities of data – from DNA databases to children at risk records – a data leak could impact public safety or trigger a mauling in the media. For private enterprise, it could generate bad publicity, and severely damage a company’s brand, reputation and bottom line.

In both instances, there’s the added risk of a fine of up to £500,000 from the Information Commissioner's Office, not to mention the chaos a leak would cause internally to day-to-day activity as forensic investigations are carried out and damage limitation begins.

Securing systems through greater network visibility and improved insight into user behaviour is a basic and intrinsic requirement of every organisation’s IT policy. There’s no excuse for not having the right security processes in place to prevent information from slipping through the net – either accidentally or maliciously.

But what processes should these be and how far should you take them? Traditional protective technologies such as firewalls, intrusion detection systems and anti-virus solutions can soon become obsolete, posing a serious security risk as IT infrastructures evolve over time, hackers develop new means of infiltrating systems, or authorised users perform unauthorised actions.

Recognising this, CESG, the UK Government's National Technical Authority for Information Assurance, has developed its Good Practice Guide 13 (GPG 13) Protective Monitoring framework to advise organisations on how to monitor exactly what is going on with their IT systems in a consistent, efficient and effective manner.

While GPG 13 is specifically aimed at the public sector, the principles behind it are just as applicable to private enterprise – as evidenced by the number of high-profile data breach stories that continue to appear in the media.

As most organisations (both private and public) are already tackling various compliance initiatives, GPG 13 may be seen by many as an unwelcome distraction – especially at a time of budget austerity. In fact, the opposite is true, as savvy organisations can use the Guidelines to support other aspects of their security infrastructure.

GPG 13 combines a number of roles, including enterprise monitoring, serving as a definition of scope for relevancy and effective deployment of monitoring technology, and as a standard for measuring the quality of organisational security information and event management (SIEM). As such, GPG 13 may be used as a best practice standard by any organisation that needs to monitor its network resources and improve auditing, accounting and monitoring processes.

GPG 13 comprises 12 Protective Monitoring Controls (PMC), each describing specific organisational requirements for monitoring. These include: accurate time in logs; recording of workstation, server or device status; recording of data back-up status; and alerting on critical events. Each PMC has a recording profile that measures the strength of a particular solution – Aware (medium), Deter (medium-high), Detect & Resist (high) and Defend (very high).
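To make one of these controls concrete, the "accurate time in logs" requirement can be checked at ingest by comparing each event's timestamp against the collector's own clock. The sketch below is a minimal, hypothetical illustration (the tolerance value and log format are invented, not taken from GPG 13 itself):

```python
import re
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical tolerance: flag entries whose timestamp drifts more
# than five minutes from the collector's clock at ingest time.
DRIFT_TOLERANCE = timedelta(minutes=5)

def timestamp_drift(log_line: str, now: datetime) -> Optional[timedelta]:
    """Return the absolute clock drift for an ISO-8601-prefixed line."""
    match = re.match(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})", log_line)
    if not match:
        return None  # no parseable timestamp on this line
    stamp = datetime.fromisoformat(match.group(1)).replace(tzinfo=timezone.utc)
    return abs(now - stamp)

now = datetime(2010, 8, 1, 12, 0, 0, tzinfo=timezone.utc)
line = "2010-08-01T11:48:00 sshd[2211]: Accepted password for root"
drift = timestamp_drift(line, now)
print(drift is not None and drift > DRIFT_TOLERANCE)  # True: 12-minute drift
```

A real deployment would of course rely on NTP-disciplined clocks across the estate rather than a per-line check, but the principle is the same: untrustworthy timestamps make every downstream correlation unreliable.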

Essentially, GPG 13 drives organisations to know exactly what’s happening on their network, systems and applications and to be alerted in real time if anything untoward occurs. On the face of it, this is easier said than done. Today’s IT departments typically operate vast numbers of different applications – each one generating complex reams of log data pertaining to that day’s activity. Gathering together and analysing the log data in the first place can be a monumental task, let alone doing it in real time.

Realistically, protective monitoring can only be efficiently and effectively achieved by using automated tools. Some sophisticated integrated SIEM solutions can instantaneously translate the inconsistent and obscure ‘technical data’ produced by infrastructure, databases and applications into consistent ‘ISO and GPG audit or business language’ so that it can be easily interpreted and more readily used to satisfy protective monitoring requirements.
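At its core, that translation step is normalisation: mapping many raw device formats onto one common event schema. The sketch below assumes two invented source formats purely for illustration; commercial SIEM products ship parsers for hundreds of device types:

```python
import re
from typing import Optional

# Hypothetical raw-log patterns for two invented sources.
PATTERNS = {
    "syslog_auth": re.compile(
        r"sshd\[\d+\]: Failed password for (?P<user>\S+) from (?P<ip>\S+)"
    ),
    "webapp_csv": re.compile(
        r"LOGIN_FAIL,(?P<user>[^,]+),(?P<ip>[^,]+)"
    ),
}

def normalise(raw: str) -> Optional[dict]:
    """Map a raw line from any known source to one common event schema."""
    for source, pattern in PATTERNS.items():
        m = pattern.search(raw)
        if m:
            return {"event": "auth_failure", "source": source, **m.groupdict()}
    return None  # unrecognised line: queue for a human to write a parser

print(normalise("Aug 1 sshd[311]: Failed password for admin from 10.0.0.9"))
print(normalise("LOGIN_FAIL,admin,10.0.0.9"))
```

Once every source speaks the same schema, correlation, trending and reporting can operate over one stream instead of dozens of bespoke formats.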

Implementing such tools should provide centralised log management, event correlation, trending, analysis, a policy compliance dashboard and a reporting engine. As well as auditing the behaviour of all users, whether they are privileged or non-privileged, the solution can be used to raise alerts for security incidents and enable efficient prioritisation, investigation and response.
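Event correlation is what turns individual log entries into actionable alerts. As a minimal sketch, assuming an invented rule (five authentication failures by the same user inside a ten-minute window), a sliding-window correlator might look like this:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical rule parameters, chosen purely for illustration.
WINDOW = timedelta(minutes=10)
THRESHOLD = 5

class FailedLoginCorrelator:
    """Raise an alert when one user fails THRESHOLD logins within WINDOW."""

    def __init__(self):
        self._events = defaultdict(deque)  # user -> timestamps in window

    def ingest(self, user: str, when: datetime) -> bool:
        """Record one failure; return True if an alert should be raised."""
        window = self._events[user]
        window.append(when)
        while window and when - window[0] > WINDOW:
            window.popleft()  # expire events that fell out of the window
        return len(window) >= THRESHOLD

corr = FailedLoginCorrelator()
start = datetime(2010, 8, 1, 9, 0)
alerts = [corr.ingest("admin", start + timedelta(minutes=i)) for i in range(5)]
print(alerts)  # first four failures stay quiet; the fifth trips the alert
```

Real SIEM rule engines layer many such rules, across users, hosts and event types, but each one reduces to the same pattern: accumulate normalised events, test a condition, and escalate when it trips.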

Such solutions don’t just benefit GPG 13. Drill down into the requirements of many other regulations – from PCI DSS to the Data Protection Act and GCSx/CoCo – and you’ll find that the need to monitor and report on network activity is a recurring theme.

Integrated log management and SIEM tools not only control budgets by ticking multiple compliance boxes, they have a dramatic positive effect on the entire IT estate – from providing better network visibility and reducing the complexity of monitoring heterogeneous IT infrastructures to improving the overall security posture of the network and simplifying IT forensics.

With all of this in mind, GPG 13 should not be seen as another distraction. Instead, it is exactly what it says on the tin – a good practice guide that will benefit all organisations, in a multitude of ways.

Ross Brewer is the vice president and managing director EMEA & APAC for LogRhythm. Brewer has over 22 years of sales and management experience in high tech and information security. Prior to joining LogRhythm, he was a senior executive at LogLogic where he served as vice president and managing director EMEA. Brewer has held senior management and sales positions in Europe for systems and security management vendor NetIQ and security vendor PentaSafe (acquired by NetIQ). He was also responsible for launching Symantec’s New Zealand operations.
