
P2P File Sharing to Blame for Marine One Data Breach

This seems like common sense to me: if you are responsible for administering or securing a corporate network, in any sector or industry, you are being negligent if you do not do everything in your power to stop P2P software from being used within your network or on machines under your control. This article from CNET is just another example of what can happen when you fail at this task:

An Internet security company claims that Iran has taken advantage of a computer security breach to obtain engineering and communications information about Marine One, President Barack Obama’s helicopter, according to a report by WPXI, NBC’s affiliate in Pittsburgh.

Tiversa, headquartered in Cranberry Township, Pa., reportedly discovered a security breach that led to the transfer of military information to an Iranian IP address, according to WPXI. The information is said to include planned engineering upgrades, avionic schematics, and computer network information.

The channel quoted the company’s CEO, Bob Boback, who said Tiversa found a file containing the entire blueprints and avionics package for Marine One.

“What appears to be a defense contractor in Bethesda, Md., had a file-sharing program on one of their systems that also contained highly sensitive blueprints for Marine One,” Boback told WPXI.

Tiversa makes products that monitor the sharing of files online. A representative for the company was not immediately available for comment.

Boback believes that the files probably were transferred through a peer-to-peer file-sharing network such as LimeWire or BearShare, then compromised.

Typically, those combatting P2P within the corporate domain use some combination of the following:

  1. Limiting the ability of users to install software on their machines.
  2. Blocking P2P applications from communicating with the network through either port blocking or IPS devices.
  3. Traffic analysis.
  4. External services like Tiversa.
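To illustrate the difference between connection-based and content-based controls (items 2 and 3 above), a deep-packet inspection tool looks for protocol signatures in the payload rather than trusting port numbers, which modern P2P clients randomize. Here is a minimal sketch in Python; the signature list is illustrative, not a complete IPS rule set:

```python
# Minimal content-based P2P detection sketch. Real IPS engines use far
# larger, regularly updated signature databases; these two are just
# well-known session-opening strings.
P2P_SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent handshake",
    b"GNUTELLA CONNECT": "Gnutella handshake",
}

def classify_payload(payload):
    """Return the name of the matched P2P protocol, or None.

    Only the first 128 bytes are checked, since these signatures
    appear at the start of a session.
    """
    head = payload[:128]
    for signature, name in P2P_SIGNATURES.items():
        if signature in head:
            return name
    return None
```

A port-based filter would miss the same traffic as soon as the client moved off its default ports, which is exactly why content inspection tends to hold up better.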

The problem is that many of these tactics, even used in combination, cannot prevent all types of data leakage. For example, many administrators simply will not grant end users administrative privileges on their local machines. This is a great step for improving the security of individual hosts and the health of the network in general, but it does not specifically address P2P software. There are ways around the “need” for administrative privileges that are within the technical abilities of a typical end user, and many P2P clients, built with the understanding that they will run in restricted environments, do not require admin rights to install.

Blocking at the traditional perimeter of the network is for the most part effective, as long as administrators keep up with the latest advances in P2P clients. Port-based blocking is less effective than content-based blocking, i.e., an IPS or other deep-packet inspection tool that identifies P2P traffic by its syntax rather than just its connection parameters. Perimeter blocking fails, however, when the network usage pattern of hosts used or controlled by employees does not match the classic INSIDE->FIREWALL->OUTSIDE template. With the proliferation of wireless networks, home computers, and telecommuting, that template is rare outside of some very controlled environments. If an employee can export sensitive data to a USB thumb drive, email it to a personal account, or simply take it with them on a laptop, then perimeter-based controls will not stop all P2P usage on machines that contain corporate data.

Traffic analysis and external review are both good ideas, but they do not actually stop a breach from occurring; rather, they provide an indication or warning that one has taken place.
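As a sketch of what traffic analysis can surface, one common heuristic is to flag internal hosts that exchange traffic with an unusually large number of distinct external peers, since P2P swarming produces exactly that fan-out. The threshold below is an illustrative assumption, not a tuned value:

```python
from collections import defaultdict

# Illustrative threshold: a host talking to this many distinct external
# peers in one observation window is worth a closer look. Real deployments
# would tune this against baseline traffic.
PEER_THRESHOLD = 50

def flag_p2p_suspects(flows, threshold=PEER_THRESHOLD):
    """Flag likely P2P hosts from flow records.

    flows: iterable of (src_ip, dst_ip) pairs, e.g. distilled from
    NetFlow or firewall logs. Returns a sorted list of suspect hosts.
    """
    peers = defaultdict(set)
    for src, dst in flows:
        peers[src].add(dst)
    return sorted(host for host, dsts in peers.items()
                  if len(dsts) >= threshold)
```

Note that this only raises a flag after the fact, which is precisely the limitation described above: detection, not prevention.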

In my opinion, the only effective way to control how data is used and transferred is through a combination of policy and Data Loss Prevention (DLP) technologies. DLP typically uses a combination of kernel-level agents and encryption to mark and manage specific types of data. Data can usually be marked explicitly as sensitive, or implicitly through pattern-matching or data-analysis tools: a file containing strings that match Social Security or credit card numbers could be marked for protection automatically, removing the human element from the process.
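As a sketch of the implicit-marking idea, a DLP filter might combine a regular expression for SSN-formatted strings with a Luhn checksum to cut false positives on candidate card numbers. The patterns here are simplified for illustration:

```python
import re

# Simplified illustrative patterns; production DLP uses broader rule sets.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate):
    """Luhn checksum: true for valid card-number digit sequences."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def is_sensitive(text):
    """Flag text containing SSN-like strings or Luhn-valid card numbers."""
    if SSN_RE.search(text):
        return True
    return any(luhn_ok(m.group()) for m in CARD_RE.finditer(text))
```

The Luhn step is what keeps an arbitrary 16-digit order number from triggering the same alarm as a real card number.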

DLP, when implemented properly, is effective in controlling the use and transport of sensitive information, but it should be reserved for data whose loss represents a serious business cost — financial databases, the recipe for the secret herbs and spices, helicopter plans, that sort of thing. It is overkill for lower-level, but still private, data.

Policy is the other tool for keeping P2P in check. A strong security policy, with strong enforcement and buy-in from senior management, can cover what DLP should not. With the P2P example, a security policy could hold that any installation or use of P2P software on corporate assets results in a formal warning for the first offense and termination for the second. The same could hold true for taking data home. But if you state this, you must be prepared to back it up: first by actively looking for violators, and second by successfully processing offenses through the HR department or business management. If you do not do both, the policy is useless in controlling data usage; lack of follow-through, or selective enforcement, is what neuters most policy-based controls. In these hard financial times, most users won't think the first season of the A-Team off of LimeWire is worth their jobs if they see someone else get canned for doing the same thing.
