The power of logs
A Cyber Security White Paper from BT
Bryan K. Fite, Senior Cyber Security Consultant, BT


Introduction

Of all the security tools available to the modern compliance officer, security professional, operations specialist or cyber analyst, none is more important, or more underappreciated, than logs. Behind every flashy executive dashboard, detailed compliance report and effective cyber response effort there are logs. Logs come in many forms but are basically a historical collection of artifacts (alerts, events, state changes) that provide insight into what happened, and when, within a networked computing environment. Logs are produced by almost every active component of a modern network. Often viewed as a necessary evil, effective log management can yield value every day to a myriad of internal and external stakeholders, allowing savvy organisations to transform a mandated operational ‘tax’ into competitive advantage.

Log management practices vary in value and effectiveness. Some organisations are only interested in meeting the minimum operational requirements or compliance standards, while others want to use predictive analytics to proactively identify threat agents or new business cases.

If logs are so important, why don’t organisations celebrate their significance? Simply put, there’s no such thing as an organisational ‘Log Czar’; there is no single business stakeholder, but many. As such, the majority of organisations manage logs within the context of the native log source service. Router logs are managed by the router custodians. Firewall logs are managed by the firewall administrators. Antivirus logs are managed by the desktop team. On and on it goes.

At first glance, this approach doesn’t appear unreasonable. That is, until there is a ‘compelling event’ that requires visibility of useful log information. We use the term ‘useful’ because, in order to provide the correct utility, regardless of stakeholder or use case, logs must be available, accurate and searchable.

Now that the utility of logs has been established, let’s explore stakeholder needs, consumption preferences and desired workflows. This will help us to identify opportunities to drive more effective log retention requirements and the supporting business case. Regardless of the stakeholder’s needs, rarely do they require logs from a single network element. All too often, the normal workflow requires auditors, analysts and operators to collect individual logs from their native sources, synchronise time (establish a base time) and correlate pivot points in order to address some business need. These collated logs are then further transformed based on the final consumer’s preference: email, dashboard, PDF or compliance reporting. These efforts are normally manual (at best semi-automated), slow and prone to error. Such ‘tribal’ practices rarely survive organisational changes intact.
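
The sketch below is a minimal illustration of what that manual workflow amounts to in practice: converting timestamps from two sources to a common base time (UTC) and correlating records on a shared pivot field such as a source IP address. The record layouts, timestamp formats and time zone offsets are assumed for illustration only.

```python
# Hypothetical illustration of the manual "base time and pivot" workflow:
# normalise timestamps from two log sources to UTC, then correlate on a
# shared field (here, a source IP address).
from datetime import datetime, timezone, timedelta
from collections import defaultdict

# Example records as they might arrive from two different sources,
# each with its own timestamp format and local offset (assumed values).
firewall_logs = [
    {"ts": "2015-03-02 14:05:11", "tz_offset": 0, "src_ip": "10.1.2.3", "action": "DENY"},
]
proxy_logs = [
    {"ts": "02/Mar/2015:09:05:40", "tz_offset": -5, "src_ip": "10.1.2.3", "url": "http://example.com"},
]

def to_utc(ts, fmt, offset_hours):
    """Parse a local timestamp and convert it to UTC (the 'base time')."""
    local = datetime.strptime(ts, fmt)
    return (local - timedelta(hours=offset_hours)).replace(tzinfo=timezone.utc)

# Build a single timeline keyed on the pivot field.
timeline = defaultdict(list)
for rec in firewall_logs:
    timeline[rec["src_ip"]].append((to_utc(rec["ts"], "%Y-%m-%d %H:%M:%S", rec["tz_offset"]), "firewall", rec))
for rec in proxy_logs:
    timeline[rec["src_ip"]].append((to_utc(rec["ts"], "%d/%b/%Y:%H:%M:%S", rec["tz_offset"]), "proxy", rec))

# Correlated, time-ordered view for one pivot value.
for event in sorted(timeline["10.1.2.3"]):
    print(event)
```

Centralising and automating exactly this step is what the rest of this paper argues for.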

Bringing it all together, the obvious answer is to consolidate, normalise and index logs centrally. By creating a central log repository, all authorised stakeholders can consume the various normalised logs to support their specific needs, reducing manual delays and errors. While conceptually easy to understand, implementing the right log management architecture is often a missed opportunity. Technology-led approaches are often too myopic and reduce business flexibility. Not all log sources are created equal. Do you send all the data back to the central database, using valuable bandwidth, or store it locally and forward indexed metadata? Do you need to consume real-time feeds, batch feeds or both? What about encryption and resiliency? These decisions will have lasting ramifications and an impact well beyond the initial effort. It is essential to create a flexible architecture that facilitates the publishing and subscribing of authoritative logs to meet today’s needs and easily support the tools of tomorrow.
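
To make the ‘publish and subscribe’ idea concrete, here is a minimal, hypothetical sketch of the pattern: each source decides per feed whether to ship full events or only indexed metadata, and authorised consumers subscribe to the normalised stream they need. The topic names and in-memory broker are assumptions, not a reference to any particular product; a real deployment would use a message bus or log-shipping agent of choice.

```python
# Illustrative publish/subscribe sketch for a central log repository.
# All names (topics, the in-memory broker) are hypothetical.
import json
import queue

broker = {"logs.full": queue.Queue(), "logs.metadata": queue.Queue()}

def publish(topic, event):
    broker[topic].put(json.dumps(event))

def forward(event, bandwidth_constrained=False):
    """Decide per source whether to ship the full event or only
    indexed metadata, keeping the raw payload at the edge."""
    if bandwidth_constrained:
        metadata = {k: event[k] for k in ("timestamp", "source", "severity")}
        publish("logs.metadata", metadata)
    else:
        publish("logs.full", event)

def subscribe(topic):
    """Authorised consumers (compliance, operations, cyber) read the
    normalised stream that matches their use case."""
    while not broker[topic].empty():
        yield json.loads(broker[topic].get())

# Example: a bandwidth-constrained site forwards metadata only;
# a data-centre source ships everything.
forward({"timestamp": "2015-03-02T14:05:11Z", "source": "branch-fw", "severity": "high", "raw": "..."}, bandwidth_constrained=True)
forward({"timestamp": "2015-03-02T14:05:12Z", "source": "core-router", "severity": "low", "raw": "..."})

for event in subscribe("logs.metadata"):
    print(event)
```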

Once the utility of centralised log retention is understood, the organisation will want to send more and more logs to the repository. Generally speaking, the more logs the better. However, this can quickly become a Big Data problem and is often outside the scope of the original business case. This creates the next business challenge: dealing with log volumes and the scalability required to support the extended mission. Capacity management is a discipline unto itself and beyond the scope of this discussion, but it should be considered.

As capacity grows, organisations will want to slice and dice the data: reporting, analytics/data mining, alerting, modelling and more. Adding this type of functionality after the fact can be very expensive and disruptive. Often, new platforms and data repositories are spun up to support each specific use case and each stakeholder’s individual need. This leads to the creation of multiple log repositories, duplicate spending and an increased attack surface.

To better understand the practical implications of these issues, it is useful to consider some concrete use cases and stakeholder communities.

Use Case 1: Compliance

Compliance means different things to different people. Simply put, compliance is an accepted obligation to conform to a specific standard, which could be an external regulatory obligation or an internal security requirement. The most costly aspects of compliance, besides the obvious fines or loss of privilege, are the hidden costs of audit support, attestation controls and reporting. This means the total cost of compliance is not well understood by most organisations. Because auditors are paid by the hour, the less time spent responding to audit requests, the greater the savings for the organisation. Auditors also react favourably when organisations can quickly produce attestation information, as it demonstrates maturity and a professional face. Combined with effective Governance, Risk and Compliance programmes, organisations can become ‘audit ready’, benefiting the organisation well beyond a tick in a box.
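
As a rough illustration of what ‘audit ready’ can mean in practice, the sketch below answers a typical attestation request, evidence of failed privileged logons within a review period, with a single query against a central, normalised repository rather than a manual collection exercise. The table schema and sample data are invented for the example.

```python
# Hypothetical sketch: producing attestation evidence from a central,
# normalised log store (modelled here as an in-memory SQLite table).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE logs (
    timestamp TEXT, source TEXT, event_type TEXT, account TEXT, outcome TEXT)""")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?, ?, ?)",
    [
        ("2015-03-01T08:00:00Z", "domain-controller", "logon", "admin", "failure"),
        ("2015-03-01T08:00:05Z", "domain-controller", "logon", "admin", "success"),
    ],
)

# Typical audit request: all failed privileged logons in the review period.
evidence = conn.execute(
    """SELECT timestamp, source, account
       FROM logs
       WHERE event_type = 'logon' AND outcome = 'failure'
         AND timestamp BETWEEN ? AND ?""",
    ("2015-03-01T00:00:00Z", "2015-03-31T23:59:59Z"),
).fetchall()

for row in evidence:
    print(row)  # export in whatever format the auditor prefers
```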

Use Case 2: Incident response

The old adage “There is no such thing as a perfect crime” might not apply to cyberspace. Perhaps it is more accurate to say, “The perfect crime is the one that is never detected.” Logs often provide the authoritative source of truth for incident responders charged with performing forensic investigations. In order to provide forensic utility, logs must be available, complete, easily searchable, forensically sound and ultimately admissible in court. Beyond these basic characteristics, the hallmark of ‘forensic friendly’ environments is the integration of the normal network and security controls in addition to other important log sources like servers, endpoints, DHCP, DNS, routes and authentication tracking. One way to measure the effectiveness of the tools available to incident responders would be the number of clicks (analyst interactions) required to perform a specific task. Analysts’ time is valuable; their technology should make them more effective, not give them ‘portal fatigue’.
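
The hypothetical sketch below illustrates the ‘fewer clicks’ point: once DHCP, DNS and authentication logs sit in one indexed repository, a single pivot query can answer which host held an address, who was logged on and what that host resolved, instead of three separate portal sessions. The field names and records are assumptions.

```python
# Hypothetical sketch: one pivot query across consolidated DHCP, DNS and
# authentication logs, instead of three separate tools.
events = [
    {"source": "dhcp", "time": "2015-03-02T09:00:00Z", "ip": "10.1.2.3", "host": "LAPTOP-42"},
    {"source": "auth", "time": "2015-03-02T09:01:10Z", "ip": "10.1.2.3", "user": "jsmith"},
    {"source": "dns",  "time": "2015-03-02T09:02:30Z", "ip": "10.1.2.3", "query": "badsite.example"},
]

def pivot(indicator, field="ip"):
    """Return every record, from every source, that matches the indicator."""
    return sorted((e for e in events if e.get(field) == indicator), key=lambda e: e["time"])

# One interaction answers: which host had the address, who was logged on,
# and what the host resolved around the time of interest.
for record in pivot("10.1.2.3"):
    print(record)
```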

Use Case 3: Operations

Everyone in the information technology world knows operations is king. Compliance, security and performance don’t mean anything if the services aren’t available. So it should be no surprise that operations is the biggest consumer of logs in most organisations. When downtime can be measured in pounds or pence, the ability to identify root cause and implement corrective action is critical. Reliable logs are indispensable to preventing, detecting and responding to outages. The optimum state would be ‘operationally transparent’, providing effective decision support to operations without interacting with individual log sources.
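
As a small, assumed example of that kind of decision support, the sketch below derives a per-service error rate from normalised log events, the sort of signal an operations dashboard might alert on without anyone touching the individual log sources. The threshold and field names are illustrative only.

```python
# Hypothetical sketch: surface a per-service error rate from normalised
# log events so operators see the outage signal, not the raw sources.
from collections import Counter

events = [  # assumed, already-normalised events from the central repository
    {"service": "payments", "level": "ERROR"},
    {"service": "payments", "level": "INFO"},
    {"service": "payments", "level": "ERROR"},
    {"service": "web",      "level": "INFO"},
]

totals, errors = Counter(), Counter()
for e in events:
    totals[e["service"]] += 1
    if e["level"] == "ERROR":
        errors[e["service"]] += 1

ALERT_THRESHOLD = 0.5  # assumed: alert when more than half of recent events are errors
for service, total in totals.items():
    rate = errors[service] / total
    status = "ALERT" if rate > ALERT_THRESHOLD else "ok"
    print(f"{service}: error rate {rate:.0%} ({status})")
```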

Use Case 4: Cyber

No organisation is immune to targeted attacks. In the domain often referred to simply as ‘cyber’, the defender assumes that motivated adversaries have, or will develop, capabilities and will target them. Because traditional signature-based security controls are ineffective against most classes of targeted attack, including the so-called APT, cyber analysts must leverage a new and diverse set of tools in response. Logs play an important part in this ecosystem, from feeding anomaly detection systems and tracking indicators of compromise (IOCs) to modelling with visual analytics. The fundamental goals are simple: improve mean time to detection and mean time to containment, making human analysts more effective.
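
A minimal sketch of two of those activities, matching indicators of compromise against the consolidated log stream and measuring the time from first malicious activity to detection (averaged across incidents, this becomes mean time to detection), is shown below. The indicator set, events and detection timestamp are invented for illustration.

```python
# Hypothetical sketch: match indicators of compromise (IOCs) against
# consolidated log events and measure time to detection for an incident.
from datetime import datetime

iocs = {"203.0.113.7", "evil-domain.example"}  # assumed indicator set

events = [  # assumed, already-normalised events from the central repository
    {"time": "2015-03-02T10:00:00Z", "dest": "203.0.113.7"},
    {"time": "2015-03-02T13:30:00Z", "dest": "intranet.local"},
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

hits = [e for e in events if e["dest"] in iocs]

if hits:
    first_activity = parse(min(h["time"] for h in hits))
    detected_at = datetime(2015, 3, 2, 16, 0)  # assumed: when an analyst confirmed the hit
    print(f"{len(hits)} IOC hit(s); time to detection: {detected_at - first_activity}")
```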

One can see from these examples that effective log management can have a tangible, positive business impact. If an organisation desires to expand and enhance its current practice and capabilities, where should it start? While there is no single ‘right way’, the logical first step is to perform some discovery activity to determine where the organisation sits on the log management maturity continuum. Often the discovery activity, combined with clearly defined business objectives, will yield obvious business cases.

With all the elements to consider, it can be difficult to create a clear vision and drive meaningful change, especially in mature environments. However, for those organisations that take up the challenge, reward awaits. Remember, log management is not the ‘killer app’; it is the technology that enables the next ‘killer app’. Organisations that embrace this truth will be able to transform logs into business value over and over again.

Discovery activity to consider:

• Who are the consumers of logs in your organisation?
• Is your organisation subject to any compliance or regulatory standards?
• How much does logging cost your organisation?
• Do you have centralised logging in place today?
• Do you have multiple log retention technologies deployed in your organisation today?
• Do you have global log retention policies and standards defined?
• Do the capabilities scale?
• Do you currently have a consolidated log capability?
• How mature are your incident response capabilities?
• Are you concerned about targeted attacks and advanced threats?
• What is your mean time to containment of incidents?