How To Fix A Leaking Wiki

on August 16, 2011 at 3:10 PM

Among the many data breaches over the past year across both the public and private sectors, none seems to strike more fear into the hearts of federal agencies, managers, and security personnel than Wikileaks. To be sure, the Wikileaks incidents have caused more than their fair share of reputational damage, embarrassment to federal officials, criminal response, and calls from angry Congressmen to establish yet another layer of regulation and auditing to help prevent this type of issue in the future.

But what’s really insidious about Wikileaks is the nature of the crime: unlike the external attacks perpetrated by Anonymous, LulzSec, and other third parties who breached security controls from the outside, the Wikileaks disclosure was allegedly committed by an insider who already had legitimate access to the information he is suspected of leaking.

What makes this threat so terrifying to many security people is that no security controls were actually broken in the commission of this crime; in fact, the suspected perpetrator may have needed access to the information he leaked in order to do his job.

This type of security threat poses an interesting – if not downright disconcerting – question for security professionals: how do you monitor breaches of security controls, when no security controls are actually breached?

The Wikileaks issue really represents a new aspect of information security: user behavioral analysis.

Traditional security monitoring flags known abnormalities with common alerts: think of a SIEM (Security Information and Event Management) solution reporting on failed logons to critical systems, an IDS (Intrusion Detection System) identifying a signature-based attack, or a firewall blocking peer-to-peer traffic. Wikileaks presents an immense problem in which these traditional controls provide very little value.
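The kind of rule-based alert a SIEM produces can be sketched in a few lines. This is a minimal illustration, not a real SIEM: the event data and threshold are hypothetical, and production tools correlate far richer log sources.

```python
from collections import Counter

# Hypothetical authentication events, as might be normalized from an auth log:
# (user, outcome) pairs. Values are illustrative only.
events = [
    ("carol", "failure"), ("carol", "failure"), ("carol", "failure"),
    ("carol", "failure"), ("carol", "failure"), ("carol", "failure"),
    ("dave", "failure"), ("dave", "success"),
]

def failed_logon_alerts(events, threshold=5):
    """Classic rule-based alert: fire when a user's failed logons
    exceed a fixed, predefined threshold."""
    failures = Counter(user for user, outcome in events if outcome == "failure")
    return [user for user, count in failures.items() if count > threshold]

print(failed_logon_alerts(events))  # carol, with 6 failures, trips the rule
```

The rule works precisely because the bad behavior is known in advance. An insider with valid credentials never generates a single failed logon, so a rule like this stays silent.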

Instead, organizations need to focus on how users are actually utilizing systems and data, based on the premise that they already have access as a prerequisite to performing their job.

Patterns must be established and monitored irrespective of security controls. Instead of monitoring a failed attempt to access a critical file or database record, security professionals must answer more complex questions:

  • “Is the user accessing more files than they normally do in a day?”
  • “Is the user submitting an unusually high number of print jobs?”
  • “Is the user copying data to an unusual location?”

When the employee or contractor already has the ability to do these things, traditional security monitoring capabilities provide no visibility into the threats that can presage a Wikileaks type of attack.
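One way to answer questions like these is to compare each user against their own historical baseline rather than against a fixed rule. Below is a minimal sketch of that idea: per-user daily file-access counts (hypothetical data) are scored against the user's own mean and standard deviation, and days that deviate sharply are flagged. Real behavioral-analysis products use far more robust statistics and many more signals.

```python
import statistics
from collections import defaultdict

# Hypothetical audit records: (user, day, files_accessed), as might be
# extracted from file-server logs. Alice's last day is a sharp spike.
audit_log = [
    ("alice", d, n) for d, n in
    enumerate([42, 38, 45, 40, 41, 39, 44, 43, 40, 420])
] + [
    ("bob", d, n) for d, n in
    enumerate([12, 15, 11, 14, 13, 12, 16, 14, 13, 15])
]

def flag_anomalies(records, z_threshold=2.5):
    """Flag days where a user's activity deviates sharply from that
    user's own baseline (a simple z-score over their daily counts)."""
    per_user = defaultdict(list)
    for user, day, count in records:
        per_user[user].append((day, count))

    alerts = []
    for user, days in per_user.items():
        counts = [c for _, c in days]
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts)
        for day, count in days:
            # Note the outlier itself inflates the baseline stdev,
            # which is why the threshold here is modest.
            if stdev > 0 and (count - mean) / stdev > z_threshold:
                alerts.append((user, day, count))
    return alerts

print(flag_anomalies(audit_log))  # flags alice's 420-file day
```

Nothing in this check depends on a control being violated: every access in the log was authorized. The alert comes purely from the change in the user's own pattern.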

Ultimately, this type of insider threat is not something that’s truly “new” – organizations have been losing data to insiders for decades. But Wikileaks was a major wake-up call that proves very public dissemination of sensitive data (even if that data is not classified) presents a problem for federal agencies that requires much more than the traditional “confidentiality, integrity and availability” paradigm that has become the hallmark of information security efforts for the past two decades.

Fixing a leaking Wiki is going to require new techniques, new tools, and most importantly, a new way of thinking among information security professionals.

John Linkous is vice president and chief security and compliance officer at eIQnetworks, Inc.