Sunday, June 20, 2010

Building Skynet - The Beginning (part 3)

"Yes, what I am began in man's mind, but I have progressed further than Man." Colossus, "Colossus: The Forbin Project", 1970

I stated in a previous blog in this series that "Security professionals are now starting to work in the 3 dimensions of logs, time and personal behavior". The civilian world knows the military is further ahead in this type of security monitoring. Why? Because the civilian world has been building converged security solutions that integrate the 3 dimensions, and selling them to the government, since the 9/11 attacks. A new threat is emerging because the security of these technologies isn't as strong as it should be.

Nouveaux security professionals claim that human behavior is the cause of all of the security breaches that cause serious damage. Duh! They claim that regulations (HIPAA, SOX, PCI, GLB, etc.) require monitoring human behavior in order to measure compliance. There is certainly merit in this approach; however, the "newbies" have focused on it as the only way to combat the numerous security issues we still have today. Security by compliance fails to monitor those who intentionally defy the regulations, and it does nothing about software vendors shipping products that are insecure out of the box. Monitoring user behavior is a reactive strategy that is ultimately doomed to failure, and it creates a worse problem: universal surveillance for our own good.

There are numerous publications advertising converged security solutions. These products may be piecemeal and not all-encompassing at present, but that is changing as the technologies mature. Check out "Security Technology Executive" magazine and look at the product ads. Cygnus Security Media is among the sponsors of Secured Cities 2010, a conference for people interested in municipal surveillance. Municipalities are starting to use a new converged security strategy linking gunshot detection with video surveillance.

David Porter from the Associated Press wrote an article "Cutting-edge Technology cuts crime" (6/20/2010) describing how converged security solutions are claiming to reduce crime in East Orange, NJ. The article states "The sensors, which work in concert with surveillance cameras, are designed to spot potential crimes by recognizing specific behavior: someone raising a fist at another person, for example, or a car slowing down as it nears a man walking on a deserted street late at night."
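As a thought experiment, here's roughly what one slice of that behavior-recognition logic boils down to once a tracker hands you per-frame positions. Everything below is hypothetical (the function name, thresholds and track format are mine, not anything from the East Orange system); it just shows how little code "spotting someone who stops" really takes:

```python
def loiterers(tracks, min_frames=30, max_move=5.0):
    """Flag track IDs that stay within max_move units of one
    position for at least min_frames consecutive frames -- a crude
    stand-in for a 'person lingering on a deserted street' detector.
    tracks maps id -> list of (x, y) positions, one per frame."""
    flagged = set()
    for tid, pts in tracks.items():
        start = 0                        # frame where the current dwell began
        for i, (x, y) in enumerate(pts):
            ax, ay = pts[start]          # anchor position of this dwell
            if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 > max_move:
                start = i                # moved away: restart the dwell clock
            elif i - start + 1 >= min_frames:
                flagged.add(tid)         # lingered long enough: flag it
                break
    return flagged
```

The real systems add classifiers, context and far better tracking, but the core is exactly this kind of rule applied at machine speed to thousands of tracks.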

Eamonn Keogh gave a talk in 2006 entitled "SAXually Explicit Images: Data Mining Large Shape Databases" that describes a technique called Symbolic Aggregate approXimation (SAX). SAX can be used to index large collections of time series and images. In other words, this technique can be used for anomaly detection in video streams.
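For the curious, here's a minimal sketch of the SAX transform itself (my own toy Python version, not Keogh's code): z-normalize the series, average it down to a few segments (Piecewise Aggregate Approximation), then map each segment mean to a letter using the equal-probability breakpoints of the normal distribution:

```python
import statistics

def sax(series, n_segments=8, alphabet="abcd"):
    """Toy SAX transform: z-normalize, PAA-average into n_segments,
    then map each segment mean to a letter using the equal-probability
    breakpoints of N(0, 1) for a 4-symbol alphabet.
    Leftover samples past n_segments * seg_len are ignored."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series) or 1.0        # guard constant series
    z = [(v - mu) / sd for v in series]
    seg_len = len(z) // n_segments
    means = [statistics.fmean(z[i * seg_len:(i + 1) * seg_len])
             for i in range(n_segments)]
    cuts = (-0.6745, 0.0, 0.6745)                # quartiles of N(0, 1)
    return "".join(alphabet[sum(m > c for c in cuts)] for m in means)
```

A steadily rising signal comes out as something like "abcd", and two windows of a video-derived signal that produce very different SAX words become candidates for anomaly flags; the words themselves can be indexed like text, which is what makes the technique scale.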

What does this have to do with the Skynet scenario? Data analysis! The 3 dimensions of converged security (cyber logs, time, personal behavior) generate tremendous amounts of data that need to be analyzed by software. In part 1 of this series, I stated there's a conflict between the builders and the controllers, and the controllers are winning. Software has assumed the analysis role, which puts it in an "advisory" role. Humans' inability to analyze huge amounts of information at internet speeds allows software to migrate to the "controller" role.

As security technology builders, we are automating the controller role so more care must be taken to ensure we don't introduce unintended consequences.

Stay tuned for more discussion. In the meantime, here are a couple of references that you can investigate on your own.

  1. "Converged security pays dividends", David Tang, Network World, 6/14/2007
  2. William Crowell is an independent consultant specializing in IT, security and intelligence systems. He co-authored "Physical & Logical Security Convergence" which is one of the first books on this subject.
  3. "Converged security will cross reference events in IT and physical security and start to correlate these events, creating remediation tasks that will lower risk and hopefully prevent attacks on organizations. This is being done through the use of IP security solutions in the physical world in collaboration with the IP network and application world.....We believe that the key players in the world will start to create a more complete solution and integrate more boxes (i.e. cameras working with the traditional IT IDP solutions) providing their clients with a complete blended threat product.", 2010

"I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy." Colossus: The Forbin Project, 1970

rcm, 2010

Monday, May 24, 2010

Building Skynet - The Beginning (part 2)

"Automated Response Needed for Cyber Attack, Napolitano Will Say"

"March 3 (Bloomberg) -- U.S. government and businesses should focus on developing new devices for authenticating computer users and automate responses to cyber attacks to bolster Internet security, Homeland Security Secretary Janet Napolitano will say in a speech today..... Napolitano will say that identifying computer users, designing security software that can react at “Internet speed” and developing hardware and software systems that work better together will make the Web more resilient to attacks."

This article by Jeff Bliss describes the DHS secretary's comments at the 2010 Spring RSA conference. In my previous blog, I talked about how we're building a real world version of Skynet, the fictional computer intelligence from the Terminator movie series. The "converged security" approach to cyber and physical security is presenting a new set of threats to our infrastructure. The blending of cyber and physical attack and defense is a danger if not done properly and securely. The automation of these tools is even more dangerous considering our inability to protect what we have online now.

Automated Attack
We've had automated scans and probes for years. We used to call them "script kiddie" attacks. I wonder now what percentage of these probes and scans really come from script kiddies. Commercial pen test tools (Core Impact, Immunity Canvas, Rapid7's Nexpose/Metasploit) and freeware tools like the Metasploit Framework are the first generation of attack tools. They require manual intervention and interpretation of results. The next logical step is to automate the analysis part of the process.

Command and Control
We know about botnets, the 21st century version of the late 1990s Distributed Denial of Service (DDoS) attack kits. We've come a long way from Trinoo, TFN2K, mstream and Stacheldraht. Or have we? The method of attack is basically the same. The DDoS tools exploited buffer overflow vulnerabilities in a variety of RPC-based services such as sadmind and ttdbserverd. The attacks were controlled by scripts and reasonably automated for their time. Botnets use the same methodology; they mostly attack web-based services to gain access to a system. The only major difference between the botnets of today and the DDoS nets of the last century is the number of machines controlled by the attack master. The botnet and DDoS command and control structures are identical. The command and control (C&C) structure is a real threat to our infrastructure for the simple reason that whoever owns that C&C structure has a powerful offensive and defensive weapon.
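To see why ownership of the C&C structure is the whole ballgame, here's a toy model of the pattern both eras share (purely illustrative: no networking, no attack logic, and the names are mine). One call by whoever holds the master fans a command out to every agent:

```python
from collections import defaultdict

class CommandServer:
    """Toy model of the C&C pattern shared by 1990s DDoS kits and
    modern botnets: a master queues commands, agents poll for them."""
    def __init__(self):
        self.queues = defaultdict(list)   # agent_id -> pending commands
        self.agents = set()

    def register(self, agent_id):
        """A compromised machine phones home and joins the net."""
        self.agents.add(agent_id)

    def broadcast(self, command):
        """One command from the master fans out to every agent."""
        for agent in self.agents:
            self.queues[agent].append(command)

    def poll(self, agent_id):
        """Agents pull and clear their pending commands."""
        pending, self.queues[agent_id] = self.queues[agent_id], []
        return pending
```

Swap the in-memory queues for IRC channels, HTTP polling or peer-to-peer relays and you have moved from 1999 to 2010 without changing the architecture, which is exactly the point: capture the master and you capture the weapon.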

Automated Defense
"REDWOOD SHORES, CALIF., March 1, 2010 -- Imperva, the data security leader, today announced the general availability of ThreatRadar, a new add-on to Imperva's market-leading Web Application Firewall (WAF) that provides automated, reputation-based defense against large scale industrialized cyber attacks."

IPS technologies are the first generation of automated defense tools, and signature and anomaly based analysis are the first generation of analysis techniques. There are also numerous research papers on "intelligent" malware defenses.

Secretary Napolitano talks about creating "security software that will react at Internet speed". When we evaluate "automated" defense tools, we try to construct packets or signatures that might fool the tool into starting its defense and maybe create a false positive situation. In my Skynet scenario, can I generate the right set of threats at "Internet speed" to trigger the wrong reaction from an automated defense tool at "Internet speed"? I'm just a dumb human, but can I design an automated tool to do just that? The answer is yes.
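Here's a toy illustration of the point (the detector and thresholds are invented for the example, not taken from any product): a rate-based "automated defense" that blocks chatty sources can be turned into a denial-of-service weapon simply by spoofing a legitimate peer's address:

```python
from collections import deque, defaultdict

class RateBlocker:
    """Toy rate-based 'automated defense': block any source that
    sends more than `limit` events within `window` seconds."""
    def __init__(self, limit=5, window=1.0):
        self.limit, self.window = limit, window
        self.events = defaultdict(deque)   # src -> recent event times
        self.blocked = set()

    def observe(self, src, t):
        q = self.events[src]
        q.append(t)
        while q and t - q[0] > self.window:   # expire old events
            q.popleft()
        if len(q) > self.limit:
            self.blocked.add(src)             # automated reaction

# An attacker spoofing a legitimate server's address can trip the
# defense into blocking that server -- a machine-speed false positive.
ids = RateBlocker(limit=5, window=1.0)
for i in range(10):                  # 10 spoofed packets in 0.1 seconds
    ids.observe("legit-server", i * 0.01)
assert "legit-server" in ids.blocked
```

The defense did exactly what it was designed to do, at "Internet speed", and that is precisely the problem: the attacker chose the reaction, and no human was in the loop to notice.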

The Crossover Problem
Let me recap some things I mentioned in my first post. First generation IDS was strictly in the cyber world. IP addresses are abstractions to the analyst. Sure, we can locate a device, but our counterattack is limited to the device itself: we block the device from accessing its targets. In a few situations, there was a physical attack in the form of a police or military raid against the attackers.

Converged security detection tools move us into the physical world. If you're an NCIS or CSI fan, you've seen examples of these technologies. You probably thought this was TV stuff, but the capabilities are real, as we're discovering. There are tons of surveillance technologies being marketed, most of them with limited or primitive automated control schemes. That is changing. Cities are linking their numerous surveillance networks into single managed entities. We're seeing a linking together of the disparate surveillance technologies (video, audio, cyber) that is marketed as being "for your safety". The sheer volume of data collected by these technologies overwhelms humans' ability to process it, and so we add cyber tools to help us analyze the information.

The Dilemma
We have designed attack tools that work at "internet speed" and our defense tools react at "internet speed". Our quandary is that humans don't work well at "internet speed". I seem to recall an old Air Force study that showed a high percentage of soldiers would not push the launch buttons in the event of a nuclear attack. I'll have to research this some more but if you know where it is, let me know.

Do we cede control of our internet defenses to software? Given the poor quality control history of most vendor software, I must admit this scenario frightens me. I worked on the Secure Code project sponsored by the SANS Institute a couple of years ago. It was disturbing to discover there weren't a lot of coders who knew basic secure coding techniques. We see that the same infection techniques used in the 1990s are still successful today. Why? Software coders have not been taught secure coding techniques. This is changing with new generations of programmers, but the damage has already been done.

1. Are we willing to cede operational control of our cyber defenses to software and AI?
2. Who has final "pull the trigger" authority? Human or software?

Stay tuned for Part 3......

Saturday, May 22, 2010

Building Skynet - The Beginning (part 1)

Robert Brewster: Oh Katie, I am sorry. I opened Pandora's Box. (Terminator 3: Rise of the Machines, 2003)

I was talking with my good friend and fellow SANS instructor, Ed Skoudis, while he was here at VA Tech teaching a SANS class. I was showing him some of the cool things that we've been working on here in the ITSO. One of our projects involved merging physical and cybersecurity tools to form a (new buzzword) "converged security architecture". Some of the things we're doing include merging IDS and GIS information, tracking Bluetooth devices, and IPv6-based tracking. This initiative is one of many in response to the 2007 shootings here. It's pretty cool stuff and incorporates all of the neat things and new technologies that one would expect from such a project. There's a whole new sub-industry that is marketing converged security solutions that incorporate a wide variety of surveillance technologies.

Then Ed asked "Do you ever wonder if what we [security professionals] are building is the future surveillance society and what we are doing is evil?" He then asked "When you talk to your grandchildren, will you be able to tell them what you built?"

It was a set of questions that make you go "hmmmm......".

Ah, the old full vs. non-disclosure argument with regard to exploits and exploit tools returns in a different form. Any veteran security person has staked out a position on this issue. I maintain you have to know how to use and create attack tools in order to defend against them. The other camp maintains that building these tools and publishing them on the net in the first place causes the problem. Our response to that is "someone else is doing it so we have to". Hmmm, where have I heard that before?

So, my first answer to Ed's questions was "you don't think I'm the only one doing this, do you?" He smiled and said that was the traditional answer most security types give. We laughed because both of us have given that answer to the question.

Cybersecurity professionals used to work in 2 dimensions: the IDS log and the timestamp. The IP addresses and times were not personal items; they were abstractions. Surveillance technologies are "personal". Combining the 2 technologies is inevitable and has happened already. Security professionals are now starting to work in the 3 dimensions of logs, time and personal behavior.

Are we worried that the technology we create can be used for good and evil? Is it any different than being an arms manufacturer? Are we IT arms manufacturers? Nation states have done this because they have the motivation, the finances and the resources. Movies like the "Enemy of the State", "Terminator 1,2,3", "Colossus: The Forbin Project", "Wargames" are great entertainment for the general public but the security geeks nod knowingly that those things are possible and indeed likely to be in effect to some degree. Are our local IDS/IPS the equivalent of firearms? Of course, they are. They can be used for good (protecting our internal infrastructure) or evil (tracking someone). We knew that when we started designing these things.

I've been saying in past presentations that we have become a surveillance society. Our personal history is available on the net. You might think you're safe because Google doesn't return anything on you. However, data mining services like Seisint and ChoicePoint have your life history in a database. I gave an assignment to my senior-level Computer & Network Security class telling them to walk to a local restaurant 6 blocks away using any route they wanted and count the number of surveillance technologies (cameras, door entry systems, credit card swipe machines, etc.). The answers I got back ranged from 50 to 105 devices. This is stunning considering I live in a small college town. Yet most people told us they didn't mind being "surveilled" because they're doing nothing wrong.

Daniel Solove wrote a wonderful paper titled "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy" that debunks this argument. Cities are linking their disparate surveillance technologies together. There was an interesting article, "China's All Seeing Eye" (Rolling Stone, Issue #1053, May 29, 2008), with a photo gallery of pics from the surveillance cameras; if the link is dead, you should be able to google the article. Image analysis techniques can now spot aberrant behavior of people in public places. For example, this technique can examine a video feed of people walking down a hallway and spot the person who stops in the middle of the hallway to leave a parcel at a door. With the tremendous amount of data being collected, automated processes that do the analysis are being built and used today. Data mining has become a multibillion dollar industry.

The same thing is happening in the IDS world. We now have SIEM/SIM/SEM products that analyze the tremendous amount of data collected and spot the aberrant behavior of a computer. Wait, didn't I just say the same thing in the last paragraph? IPS was the first generation detect and react (D&R) system. Armed drones controlled by humans now, and soon by computers, are another example of D&R systems.
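A toy sketch of the kind of cross-referencing a converged SIEM performs (the event formats and the one-hour window are my assumptions, not any product's): flag a console login by someone who never badged into the building beforehand:

```python
from datetime import datetime, timedelta

def suspicious_logins(badge_events, login_events,
                      window=timedelta(hours=1)):
    """Correlate physical and cyber events: flag console logins by
    users with no badge-in during the preceding `window`.
    Events are (user, datetime) tuples."""
    flagged = []
    for user, t in login_events:
        badged = any(u == user and t - window <= bt <= t
                     for u, bt in badge_events)
        if not badged:
            flagged.append((user, t))   # physically absent, yet logged in
    return flagged
```

The point is that neither data stream is alarming on its own; only the join across the physical and cyber dimensions produces the alert, and only software can do that join at the volumes involved.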

We assume the attackers of these systems are human. But it's only a matter of time before the attackers are the computers (and their software) themselves. The IPS technique of detect an attack (traditional IDS, spot intruders at a fence) and react (block a port/machine in the cyberworld, launch an armed drone against the attacker in the real world) is being automated to respond to physical world attacks.

The Blaster attack of 2003 overwhelmed our ability to respond quickly to an automated attack. I suspect automated defense mechanisms such as IPS devices were developed in response to that type of attack. The recent stock market plunge was supposedly caused by an automated response that happened faster than the humans could contain it.

This brought out one of the ugly secrets of cyber security.

Ugly Secret 1: Security types know we're becoming a surveillance society. We are helping to build it. We believe we can control those who use it (controllers) because we built it (builders).

Ugly Secret 2: Controllers trump builders. Controllers aren't always who we think they are. They are humans and software.

In the next couple of blog entries, I'm planning on exploring more issues on this topic.
(to be continued...)

Wednesday, May 19, 2010

Securing Sensitive Data Issues

For all of us, the problem is well defined. We need to protect sensitive data that is stored on our IT devices. However, we need to be careful to avoid common security solution pitfalls. Typically, the sensitive data problem can be broken down into these phases.

Sensitive Data Protection Strategy

1. Define Sensitive Data. Protect "sensitive data" as defined by our
local policies. Things like SSN, CCN, Driver's license #'s, Bank
account #'s, Passport #'s, FERPA/HIPAA/GLB protected data items are
generally agreed to be "sensitive data". We can use NIST, PCI,
Educause and other guidelines to help us define sensitive data but for
the most part, the aforementioned items will be a good subset of these
sensitive data definitions.

2. Find the "sensitive data". Where is sensitive data likely to be
stored? Do you know where all of the servers on your campus network
are? Likely storage places include database servers (Oracle, MySQL,
Postgres, MSSQL, FileMaker Pro), www servers if not properly designed
and configured, end user systems (desktops/laptops) and mobile devices
(USB drives, CDs, backup media, smart phones, iPads, etc.). Use
commercial tools like Identity Finder or freeware tools like Cornell's
Spider, our Find_SSN or UT-Austin's SENF. Realize that these tools may
not find ALL of the sensitive data defined in step 1, but they are a
good start. The resistance to using these tools comes from the
complaint that they a) generate lots of false positives, b) require
the user to examine all of these files and c) tell the user or
sysadmins what they don't want to know. Some of these tools will not
run on all 3 of the common platforms (Windows, Mac, Unix/Linux), and
most big DB servers are Unix/Linux based.
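To give a feel for what tools like Spider or Find_SSN do, and where the false positives come from, here's a stripped-down scanner (my own sketch, not the logic of any of those tools): pattern-match candidate SSNs and card numbers, then use the Luhn checksum to discard most random digit strings:

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CCN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Luhn checksum -- filters most random 13-16 digit strings,
    which is the main defense against false positives."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text):
    """Return (kind, match) pairs for candidate SSNs and CCNs."""
    hits = [("SSN", m.group()) for m in SSN_RE.finditer(text)]
    for m in CCN_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(("CCN", m.group().strip()))
    return hits
```

Even with the checksum, a 9-digit part number formatted like an SSN still fires, which is exactly why these tools make users review every hit.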

3. Beware of making a distinction about where the sensitive data is
stored. Who cares whether it's stored on a mobile device or a DB server?
The problem is still there -- it must be protected at rest and in
transit. A flat out statement saying ANY sensitive data must be
protected regardless of WHERE it's stored simplifies the enforcement
mechanism of our sensitive data standards. Having said that, mobile
devices like smart phones introduce a whole new set of issues for
safeguarding email attachments sent to them.

4. How is sensitive data used? It's absolutely critical for security
officers and policy writers to understand the business processes that
use sensitive data. Protection solutions must be tailored to address
these processes. Some of the ways we've discovered sensitive data is
used include:
a. single user/single folder - the user puts all of the sensitive
data files in a single folder/directory.
b. multiple user/one person with write access - multiple people
access the sensitive data folder in Read mode. Only one person
has write access to the file or folder.
c. multiple user/multiple people with write access to files in a
folder - most common environment for offices that handle sensitive
data (Controller's office, HR, Registrar, Grad School, Admissions, etc.).
d. Email attachments to internal users - using wide variety of email
systems (Exchange, Imap, Gmail, etc.). Some solutions are
effective in this case where you send an email to another
internal (to your EDU) user.
e. Email attachments to external users - we all have to send
sensitive data to external agencies. They may NOT support your
encryption schemes. For example, we probably have to send reports
to the Feds, state or NCAA regulatory agencies.
Institutional research groups, athletic associations within our
EDUs are prime users of this function, I suspect.

Encrypting email attachments is a critical task.

5. Encryption is the catch-all method for protecting sensitive data.
In our research over the past 2 years, we've found NO enterprise-wide
encryption solution that is "cost-effective" in the EDU environment.
The key phrase is "cost effective". There are commercial solutions
that address a segment of the problem, but they become expensive as
more licenses have to be granted. There are solutions that work well
in the Windows world but not the Mac or Unix/Linux world. Some people
want to say you can only use Windows systems to store sensitive data,
but what about the big enterprise DB server? I've heard people talk
about whole disk encryption (on-the-fly encryption, aka OTFE) as a
solution. However, we need to fully understand how OTFE works.

OTFE (Whole Disk Encryption) Issues

Tools like BitLocker, GuardianEdge and TrueCrypt's whole disk
encryption option were some of the items mentioned to us as a
control for securing data at rest. A number of you have told me
that full disk encryption satisfies the at-rest part of your standard.
Beware the false sense of security full disk encryption may bring.

1. Full disk encryption (FDE) schemes such as BitLocker and TrueCrypt
use on-the-fly encryption techniques to encrypt the disk. A friend of
mine describes OTFE as "On-the-fly encryption (OTFE), also known as
Real-time Encryption, is a method used by some encryption programs,
for example, disk encryption software. "On-the-fly" refers to the fact
that the files are accessible *immediately after the key is provided*
and in the case of FDE encryption, the key is provided at boot. While
the files on disk are still encrypted at rest, the keys are in memory
and decryption occurs "On-the-fly" upon file access (not at rest). So
once booted, ANYONE WITH READ ACCESS may read (decrypt) the files."
This is the critical piece.

2. This is significant: as long as the system is booted up, your
files are encrypted only UNTIL they are accessed by a userid, or a
process owned by a userid, that has READ access to the files in
question. World read access allows any userid to decrypt the file. A
process running under your userid's privileges can decrypt any file
you have read access to, and any malware running under your userid
has that same access.

3. Even if you are running OTFE of some sort, you should use an
additional encryption scheme like TrueCrypt, PGP, GPG, or some other
system like RMS. Decrypt the file or folder only when you need to
access it. Yes, all we're doing is reducing the "decrypted" window,
but this window is MUCH smaller than the one on OTFE-only systems.
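The pattern in point 3 can be sketched like this. The XOR "cipher" below is a deliberately toy stand-in for a real tool (TrueCrypt, PGP, GPG) and must never be used for real data; the point is the shape of the code, where cleartext exists only inside the `with` block:

```python
import secrets

def xor_bytes(data, key):
    """Toy XOR 'cipher' -- a placeholder for real encryption.
    Do NOT use XOR to protect real data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SealedFile:
    """Keep content encrypted except inside the `with` block,
    shrinking the decrypted window that OTFE leaves open for the
    whole time the machine is booted."""
    def __init__(self, ciphertext, key):
        self.ciphertext, self.key = ciphertext, key
        self.plaintext = None

    def __enter__(self):
        self.plaintext = xor_bytes(self.ciphertext, self.key)
        return self.plaintext

    def __exit__(self, *exc):
        self.plaintext = None        # drop cleartext immediately on exit
        return False

key = secrets.token_bytes(16)
sealed = SealedFile(xor_bytes(b"SSN 123-45-6789", key), key)
with sealed as clear:
    assert clear == b"SSN 123-45-6789"   # readable only inside the block
assert sealed.plaintext is None          # re-sealed the moment we leave
```

Contrast this with OTFE: there, the equivalent of the `with` block is entered at boot and never exited until shutdown.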

4. Regardless of which encryption scheme you use, you should still run
Find_SSN/CCN, IdentityFinder or any of the other sensitive data search
tools frequently on your systems. Whole disk encryption should never
be used as a reason for not running these tools on your system. You
need to know exactly where all sensitive data is located.

5. Full disk encryption does nothing to stop malware, viruses or
trojan software from reading your files. After boot, if I have read
access to your files, I have the files.

Is protecting our sensitive data an intractable problem? Given the
cost of enterprise wide solutions, it may be. It might be time for the
EDU community to band together as a consumer group seeking an
enterprise wide solution.

-rcm, 5/11/2010

Tuesday, March 16, 2010

How Vendor Software Undercuts Password Controls

The biggest problem with password controls such as aging, resets and adhering to
strength guidelines is that vendor applications are sometimes the crippling factor in enforcing your rules.

For example, earlier versions (circa 2005-2006) of Oracle (before 11g) have an inherent password weakness that defeats most sensible strength rules. Google for "oracle password weakness" or "An Assessment of the Oracle Password Hashing Algorithm" by Josh Wright and Carlos Cid to see some of the issues. Basically, Oracle passwords were converted to uppercase, certain special characters were restricted because they are used in standard DB queries, and the hash algorithm was weak: 'marchany' and 'MARCHANY' would generate the same hash. This problem has been fixed in newer versions of Oracle, but I believe the fix is in the Oracle Security package. Please verify that.
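You can see how much the case folding alone costs with a few lines of Python (SHA-256 standing in for Oracle's actual DES-based scheme, so this models the folding, not the real hash):

```python
import hashlib

def weak_hash(password):
    """Mimics the pre-11g Oracle behavior of case-folding the
    password before hashing. SHA-256 is a stand-in here for the
    actual DES-based Oracle algorithm."""
    return hashlib.sha256(password.upper().encode()).hexdigest()

# Case differences are destroyed before hashing, so mixed-case
# strength rules buy you nothing against this scheme.
assert weak_hash("Tr0ub4dor") == weak_hash("TR0UB4DOR")
# Each alphabetic position loses a factor of 2 in keyspace,
# so an 8-letter password loses a factor of 2**8 = 256.
```

This is the sense in which the vendor, not your policy, sets the real strength ceiling.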

Also, the Apache mod_security module can cripple password strength by disallowing ANY special characters in input fields (to prevent SQL injection), but the net effect is that password strength is seriously weakened. One of the guys who works for me couldn't log into a web app because the web developers used mod_security and he had special characters in his password.

My point in all of this is that BEFORE you embark on a mission of enforcing password strength, aging, etc., you should examine how ALL of your password-enabled apps treat password features like strength and aging. You may find that you are forced into a lowest common denominator approach.

Friday, March 5, 2010

Mobile Device Security

A couple of years ago, we started investigating the IT security of mobile devices and the 1st generation of smart phones. Grant Jacoby was the first of a couple of grad students who researched how to implement some sort of IDS on PDAs and smart phones. He discovered that the Windows Mobile OS doesn't allow access to raw sockets, supposedly for "security" reasons. This restriction basically prevented us from writing any type of IDS program for that platform.

So, how could we create an IDS that would be effective on those platforms? We decided to look at the power output of the batteries to see if we could detect aberrant behavior. We discovered a number of things.
  1. Smart batteries are supposed to output their power readings every second. We discovered that the interval actually varied from 1-9 seconds. So much for standards.....
  2. For idle devices, we were able to detect anomalous behavior by monitoring the power output of the batteries.
  3. We couldn't determine the type of attacks but we can definitely say "something is attacking us" :-)
If you want the gory details, check out our IEEE Security & Privacy article on the subject.
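The idle-device detection in point 2 can be caricatured in a few lines (the z-score approach and thresholds here are my simplification for illustration, not the algorithm from the paper): learn a baseline from idle power samples, then flag readings that deviate wildly:

```python
import statistics

def power_anomalies(readings, baseline_n=20, z_thresh=3.0):
    """Flag indices where the instantaneous battery power draw
    deviates more than z_thresh standard deviations from a baseline
    learned on the first baseline_n idle samples -- a crude sketch
    of battery-based anomaly detection on a mobile device."""
    base = readings[:baseline_n]
    mu = statistics.fmean(base)
    sd = statistics.pstdev(base) or 1e-9   # guard a perfectly flat baseline
    return [i for i, r in enumerate(readings[baseline_n:], baseline_n)
            if abs(r - mu) / sd > z_thresh]
```

As in our results, this tells you "something is attacking us" (the radio or CPU is burning power it shouldn't be) without telling you what the attack is.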


Monday, January 18, 2010

The Bagpiper & the Homeless Man - A true story

As a bagpiper, I play many gigs. Recently I was asked by a funeral director to play at a graveside service for a homeless man. He had no family or friends, so the service was to be at a pauper's cemetery in the Kentucky back-country. As I was not familiar with the backwoods, I got lost and being a typical man I didn't stop for directions. I finally arrived an hour
late and saw the funeral guy had evidently gone and the hearse was nowhere in sight.

There were only the diggers and crew left and they were eating lunch. I felt badly and apologized to the men for being late. I went to the side of the grave and looked down and the vault lid was already in place. I didn't know what else to do, so I started to play. The workers put down their lunches and began to gather around. I played out my heart and soul for this man with no family and friends.

I played like I've never played before for this homeless man. And as I played 'Amazing Grace,' the workers began to weep. They wept, I wept, we all wept together. When I finished I packed up my bagpipes and started for my car. Though my head hung low, my heart was full.
As I opened the door to my car, I heard one of the workers say, "I never seen nothin' like that before and I've been putting in septic tanks for twenty years." :-)

Thanks to Joe Morgan for this story.