Monday, August 31, 2020

RDP Security Tip and other Infographics

Thanks to Thomas Roccia for this great resource he created. It's at https://medium.com/@tom_rock/security-infographics-9c4d3bd891ef. I think you'll find these infographics particularly useful in any presentation you do.

We've been asked a lot about Remote Desktop security given the Work From Home (WFH) situation we're in during the pandemic. It is a serious problem, and here's a great infographic from Thomas' site.



Saturday, August 8, 2020

Academic Freedom and IT Security - They Do Work Well Together

I was a member of a panel on Cyber Hygiene sponsored by the SANS Institute today. My good buddies Tony Sager and Russell Eubanks were also on the panel.

An attendee asked me about the challenge of balancing IT Security practices against the cherished principle of Academic Freedom (AF). I responded that IT has to stop being the Department of NO and instead go out, listen and learn how researchers do their work. Only then should IT decide on a path that supports rather than hinders that research. Given the multitude of tasks most IT people juggle, it's hard to find the time to meet end users and learn how they actually do things. But understanding how and why your end users work lets you design and build a more effective IT Security program and architecture. It's short-term pain for long-term gain: the time you invest up front lessens the time you'll spend firefighting outside your normal duties later.

It was a great question and it got me thinking about the issue a little more and hence, this blog entry. I've been working in EDU IT for 45 years now and here are some musings on this balancing challenge.

I went on a motorcycle ride and got to thinking more about the question while I was riding through the mountains. It occurred to me that there should be no conflict between IT security and AF principles.  IT Security practices should enhance and protect AF. One complements the other. 

First, let's try to define "academic freedom" for the purpose of this blog. Here are some definitions that I'll use as my foundation. Academic Freedom is defined as:

1. a scholar's freedom to express ideas without risk of official interference or professional disadvantage. "we cannot protect academic freedom by denying others the right to an opposing view" (Oxford Dictionary)

2. Academic freedom means that both faculty members and students can engage in intellectual debate without fear of censorship or retaliation. (https://www.insidehighered.com/views/2010/12/21/defining-academic-freedom)

3. Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties. Teachers are entitled to freedom in the classroom in discussing their subject, but they should be careful not to introduce into their teaching controversial matter that has no relation to their subject. (https://www.aaup.org/issues/academic-freedom/professors-and-institutions)

After reading these definitions, I tried to see what the conflict was between IT practices and Academic Freedom (AF). Frankly, I saw more opportunities for IT practices to support, secure and protect AF. All 3 of the above definitions emphasize the right of the academic community to discuss any topic freely, without fear of censorship or retaliation. Looking at this from the IT Security point of view, the threats to AF in the online world map neatly onto attacks against its Confidentiality, Integrity and Availability (CIA).

For example, let's look at censorship. DOS/DDOS attacks,  domain blocking, confiscation of servers or endpoints are examples of availability attacks. Unauthorized modification of topics/data is an example of an integrity attack. Doxing is an example of a confidentiality attack. 

There are existing IT Security practices that can mitigate each of these classes of attack. Availability threats such as DOS/DDOS attacks can be deflected, and domain blocking can be addressed. Good file permission strategies, good backups and file integrity tools can mitigate integrity attacks (see the sketch below). Doxxers and online "bullies" can be hunted down using techniques such as OSINT and log analysis, protecting individuals from harassment or retaliation.
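To make the file integrity piece concrete, here's a minimal sketch of that idea in Python. It's an illustration only: the ~/research directory and baseline filename are hypothetical, and production tools such as Tripwire or AIDE do this job far more robustly.

```python
# Sketch: a minimal file-integrity baseline over a hypothetical ~/research
# directory. Detects modified or deleted files by comparing SHA-256 digests
# against a stored baseline.
import hashlib
import json
from pathlib import Path

BASELINE = Path("integrity-baseline.json")
WATCHED = Path.home() / "research"   # hypothetical directory to protect

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot() -> dict:
    return {str(p): hash_file(p) for p in WATCHED.rglob("*") if p.is_file()}

if BASELINE.exists():
    old = json.loads(BASELINE.read_text())
    new = snapshot()
    for path, digest in new.items():
        if old.get(path) not in (None, digest):
            print(f"MODIFIED: {path}")       # possible integrity attack
    for path in old.keys() - new.keys():
        print(f"DELETED: {path}")
else:
    BASELINE.write_text(json.dumps(snapshot()))
    print("Baseline recorded.")
```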

Sound IT Security practices can and should be used to further academic freedom. I think the supposed conflict between IT Security and AF is not the big issue everyone outside of the EDU world thinks it is.

To the webinar attendee who asked me about balancing IT Security practices with Academic Freedom, let me say this: IT Security should support academic freedom by designing procedures that protect one's right to it. It should never interfere with that core business process.

This is my short answer to this question. I'd like to hear your opinions on this matter.


Friday, August 7, 2020

Encryption, Security and Privacy, Oh My!


We've been hearing a lot of discussion about encryption these days. The Federal government proposes installing "backdoors" in encryption algorithms to allow law enforcement and security groups to monitor communication between entities who pose a threat to "our" security. We'll talk more about this later, but we want to emphasize this is an age-old argument.
Clay Bennett won a Pulitzer Prize in 2002 for an editorial cartoon that expertly captures the security vs. privacy issue. Imagine a house with two people inside it and a wooden fence around it. The house is labeled "PRIVACY". Workmen are removing planks from the house and using them to build the fence, which is labeled "SECURITY". Security vs. privacy is like a see-saw: the more security you want, the less privacy you have. It is not a "vice versa" situation, though. More privacy does not necessarily mean less security.

Security advocates usually say "if you're not doing anything wrong, then you shouldn't be worried". There are lots of flaws with this argument, the most common being: who defines what "wrong" means? Does wrong mean "illegal", or does it include dissent? A common definition of privacy is the "right to be left alone".
Encryption provides a way to hide something you send or store from unauthorized entities. It can be as basic as speaking a foreign language to someone or as sophisticated as something based on higher-order mathematics. For example, the Navajo code talkers used their language as an "encryption" method of communicating without the enemy being able to determine what was being said. You "break" this encryption technique by finding someone fluent in the language being used. As with any process, encryption can be used for good or evil.

In the 1990s, the Federal government proposed a method (the Clipper chip) that would allow law enforcement and security groups to decrypt encrypted information. The resulting uproar was instrumental in shooting the proposal down, but it showed how poorly people understood how encryption works. The Clipper chip was a "backdoor" way to decrypt a file or transmission. Suppose you put your tax papers in a vault to protect them from unauthorized access, and you use a lock and key to gain access. A backdoor would be something like a master key that can also open that lock. Common sense tells us that a) the master key needs to be guarded all the time, b) the person who has the master key must not be evil, and c) the person who has the regular key should know a master key exists.

So what's the problem? Well, in the digital world, copies can be made without the owner's knowledge. Any good hacker will try to get that "master" key and use it, and it's folly to assume a digital master key/backdoor would never be compromised. The 2011 RSA hack and the 2013 attack on Bit9 (now Carbon Black) are examples of hackers going after "master" keys with success. While the whole purpose of encryption is to protect data at rest and in transit, there are still ways to get at the data in its original form. Consider the following:

A -> K -> M1 -> C1 -> EC ----------> DC -> C2 -> M1 -> file/display -> B

Person A uses a keyboard K to create a message M1, which is stored on computer C1 and encrypted by tool EC. The encrypted message arrives at the target machine and is decrypted by tool DC running on device C2; the data M1 is either stored in a file or shown on the display to person B. The message is encrypted only on the path from EC on C1 to DC on C2. The attack points where the data could be copied are K, C1, C2, M1 and the file/display. Note that an attacker at these points does NOT need to know your encryption key. Why? The data is in the clear when it's entered at K and when it's stored in file M1. If you write a program to grab the data at these points, you get the data in the clear.
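Here's a minimal sketch of that pipeline in Python, using the third-party cryptography package (my choice for illustration, not part of the original article). The point is that the message is in the clear at both endpoints even though the channel between EC and DC is protected.

```python
# Sketch of the A -> K -> M1 -> C1 -> EC ... DC -> C2 -> B pipeline above.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared key; a backdoor would be a second way in
message = b"meet at noon"          # M1: typed at keyboard K -- cleartext attack point

ec = Fernet(key)
ciphertext = ec.encrypt(message)   # EC on C1: data is protected only from here...

dc = Fernet(key)
plaintext = dc.decrypt(ciphertext) # ...to DC on C2: decrypted for display to B

# A keystroke logger at K, or a process reading M1 on C1 or C2, gets
# `message`/`plaintext` in the clear without ever touching `key`.
print(plaintext)
```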

This is nothing new. The first public reporting of this technique came in 1998, when the FBI used a keystroke recorder against a mafia don's computer. The recorder let them collect the information used to prosecute him; it copied the data as it was entered, before it was encrypted. The 2001 Magic Lantern tool and the 2009 CIPAV (Computer and Internet Protocol Address Verifier) were law enforcement tools developed to get data before it was encrypted. These were very effective techniques and did not require a "backdoor" to an encryption algorithm.

So, let's go back to the privacy part of this essay. Those who advocate the "security" side maintain there's a need to determine whether criminal activity is being planned; if criminals use encryption to hide their intent, then the government needs to be able to decrypt those messages. But there are ways to get data before it is encrypted, so why the need for a backdoor? Remember that a message or data in a file starts as cleartext, and data capture techniques have been around for the past 20 years. Since they don't require a backdoor to the encryption algorithm, one could assume the real target is privacy. Why? The introduction of backdoors into any encryption algorithm destroys the algorithm as an encryption tool. The backdoor(s) will become publicly known eventually, and at that point the encryption effectively ceases to exist.

What’s truly ironic about this contention is that individuals are freely giving up lots of personal information to commercial companies.


This is a reprint of an article originally posted at https://encryption-and-data-loss-protection-solutions.enterprisesecuritymag.com/cxoinsight/encryption-security-and-privacy-oh-my-nid-1455-cid-5.html

Out with the Old! In with the New! Perimeter Border replaced by Data and Identity Borders - Some Thoughts



Here are a few questions that I'll address in upcoming blog posts.
  • Are the industry threats your threats? Just because the magic quadrant says Threat A is the critical threat you need to address doesn't mean it applies to your network. What metrics have you collected to determine the root cause of compromises or breaches in your org? While phishing is one of the major threats touted in the cybersecurity mags, is it the root cause at your site? For example, for us, the 2 major root causes that led to breaches (big ones) affecting the entire institution were a) poor password management and b) failure to apply OS and application patches in a timely manner. While we did have lots of successful phishing attacks, the consequences of those hits were limited to 1 or 2 people - the person who fell for the phish and/or their immediate family. On the other hand, a sister institution found almost the opposite: phishing was a primary vector in their case. My point is that we need to take the time to evaluate the real causes of successful attacks against our infrastructure/data/credentials and then use this information to buy/build tools/processes that address those threats (see the sketch after this list). This helps us avoid wasting money on defensive tools that address 1% of successful attacks against us.
  • The New Borders - Your Identity and Your Data. I used to say (still do) that the effective security perimeter is the device, not the border. As more and more devices become "personal" rather than "organizational", the border becomes your phone, tablet, laptop, server, etc. BYOD is forcing us to adapt to this new paradigm, and mobility becomes the new data flow process.
  • Work From Home (WFH) has drastically changed the "border".  
  • Both ends (endpoint clients, servers)  of the traditional client-server process aren't necessarily inside your traditional "border". How are you approaching the visibility issue?
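As promised in the first bullet, here's a minimal sketch of the kind of root-cause tally I mean. The incidents.csv file and its root_cause column are hypothetical stand-ins for whatever your incident tracking system exports.

```python
# Sketch: tally breach root causes from your own incident records instead of
# trusting industry top-10 lists. Assumes a hypothetical incidents.csv with a
# "root_cause" column (e.g., phishing, weak-password, unpatched-host).
import csv
from collections import Counter

causes = Counter()
with open("incidents.csv", newline="") as f:
    for row in csv.DictReader(f):
        causes[row["root_cause"]] += 1

total = sum(causes.values())
for cause, n in causes.most_common():
    print(f"{cause:20s} {n:4d}  ({n / total:.0%})")
```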


Monday, April 16, 2018

Why Should Corporate Security Be Like Museums? EDUs Already Are.

I was preparing a talk for the 2018 Educause Security Professionals Conference and was trying to think of ways to show how EDU networks are really microcosms of society. I wrote in an earlier blog that EDUs are small cities, and I've said that our network security strategy is a blend of commercial and ISP requirements. It wasn't until I ran into my friend, Christian Schreiber, that I heard the best analogy so far. As a CISO, I have to give a presentation to our Board of Visitors, our version of the corporate board of directors, every now and then. I like to use real world examples to explain our security strategy. Most board members come from the corporate world and want to know why we don't follow a corporate IT security strategy.

Well, Christian was working on a talk and he said EDUs are like museums. At first, I thought he was going to tease us about being quaint, staid and stuffy. Rather than state the obvious :-), he pointed out the following:
  1. Museums allow all sorts of individuals into their buildings.
  2. Museums have high value assets and protect them with a variety of tools and technical expertise.
  3. Key assets are highlighted to make them more accessible to the public.
  4. Museums cover their interiors with a wide variety of monitoring tools.
  5. Museums focus on detecting malicious operators who may already be inside the building.
 Christian went further and gave some examples of museum defense in depth:
  1. Museums have few access points but they allow free flowing access to anyone.
  2. Museums erect additional barriers around high value assets.
  3. Museums have pervasive monitoring tools: video cameras, motion detectors, laser detection systems, visitor logs.
  4. Museums have numerous active response capabilities such as: uniformed guards, on-demand barriers, fire suppression systems, moving doors.
  5. Museums have recovery systems such as insurance and tracking devices embedded in high value assets.
  6. Museums assume there are hostiles inside their buildings.
As you can see, there are Continuous Monitoring, Zero Trust Network and network forensics components embedded in the items above. Museums allow visitors to bring their own devices, take pictures, buy souvenirs and wander freely within public spaces. They also have restricted areas that require additional authentication and authorization.

IoT and BYOD have been forcing orgs to reconsider how their network security should be implemented. The traditional border security model will fail in the new technology landscape unless it adapts to a mobile user environment. I used to say the device was the border. Nowadays, I believe there are 2 new borders that need to be considered:
  1. User identity - users access their work/home assets from all over the internet. For example, EDUROAM allows members of one EDU to connect to the internet using another EDU's network and the member's home institution credentials.
  2. Data - if data becomes the new border, does it matter where it's stored? If the protection schemes focus on the data element itself, then I don't believe it does.
Given these 2 new borders, the museum defense model makes a lot of sense. This doesn't mean you should discard the older perimeter style defenses, but it does mean the combination of these layers forms the basis of a reasonable, successful museum-style defense.


Friday, January 5, 2018

Cybersecurity's Biggest Mistake - The Daystrom Syndrome

I've been very fortunate to be part of the design team of the Virginia Cyber Range (www.virginiacyberrange.org). The range is designed to a) be a course repository (full course material, individual course modules, individual lab exercises) for NSA CAE schools in VA and K-12 schools in VA and b) provide an environment to run these classes and exercises from any location in the world. I'll have more on that in a later blog. One of the unexpected surprises in the project has been the enthusiastic adoption of the Range by the K-12 schools. K-12 teachers were caught in the middle of a number of competing worlds:
  • Federal and state political pressure on school systems to include cybersecurity concepts in K-12 classes
  • School system pressure on K-12 schools to do the same
  • Local (principal) pressure on local faculty to develop these courses
  • Teachers unable to create these courses because of school system and local IT resistance to building the environments needed to teach them.
That last bullet item turned out to be the major stumbling block in implementing these education programs. Why? As you probably know, local school systems have tightly regulated, locked down and restricted access to the internet from their school networks. Some of the reasons have to do with parental concern about questionable material/people on the net getting access to K-12 students, and with the school IT staff's general concern about protecting systems and data from unauthorized access. I suspect the real reason is a lack of funding to increase IT staff sizes and provide training to said staff. When you're 1 admin for 1000 machines, you're not going to allow special cases, simply because you don't have the cycles to provide the required support.

I came from the sysadmin world and remember the "prime directive" of sysadmins: "Keep the systems running at all costs". This directive, while noble, has caused more security headaches over the past 25 years than almost anything else. Simple things like patching the OS, applications and hardware for security issues run headlong into the sysadmin prime directive, with the result that security vulnerabilities are not corrected in a timely manner.

This reminds me of the "Ultimate Computer" episode of Star Trek (TOS). The Enterprise was fitted with the new M5 computer, which automated the ship's handling and its offensive and defensive capabilities. When things went south quickly because the M5 started behaving in a dangerous manner, Dr. Daystrom was blind to what the machine was doing because of his loyalty to a particular train of thought ("You don't shut a child off when it makes a mistake. M-5 is growing, learning." "Learning to kill." "To defend itself. It's quite a different thing.")

Sysadmins were infected with the "Daystrom syndrome": we became so involved with (enamored of?) our technology that we lost sight of its real goal, which is to let people use the technology in ways that are meaningful to themselves and to the business. Some examples of this Daystrom Syndrome variant include:
  • making systems harder to use for the sake of system "security"
  • restricting how users can access information that is "questionable" to the IT person but not the user. We're not talking about porn here. We're talking about using the Internet as a research tool to get software, algorithms, etc. that make our business more efficient and how this behavior is restricted by IT because of security issues.
  • not patching systems because that would require them to be unavailable for a period of time. This downtime violates the 24x7 availability rule that is one of the governing forces shaping sysadmin behavior.
  • Anything that causes the user to say "IT won't let me do this"
  • Anything that causes sysadmins to say "users will wreck our security, availability, stability".
Sysadmins and their upper mgt have forgotten that the prime reason IT exists in business is to help the business make more money (grow the business) by making business processes more efficient.

Let me come back to the Range and K-12 scenario. The conundrum is that K-12 teachers need machines that can connect to the net and can be configured and modified by teachers and students. Let's also face the fact that most school IT suffers from low budgets and the machine/staff ratio is frighteningly high. These factors, combined with the Daystrom syndrome, mean the K-12 teachers are told they can't use the school systems or network to build these cybersecurity classes. The Range provides an environment that lets teachers create a space for their classes without IT interference; the school IT staff just have to allow web access to the Range. Unfortunately, this is sometimes easier said than done.

This brings me back to my premise: by imposing unnecessary restrictions on user behavior, IT prevents users from doing their jobs, which encourages them to bypass the restrictions, and that creates a worse security problem than the one IT was trying to solve.

It's time for us to rethink the model.

Monday, June 5, 2017

Assume They're In Your Network Already



1. Background


Traditional network border defense strategies have focused on a) keeping intruders out of a network and b) protecting internal devices from compromise. Historically, sites have implemented their security strategy from the border inward rather than from the endpoint outward.

According to www.privacyrights.org, 895,871,345 records had been breached as of 2/21/2016. Data from this and similar sites suggests the traditional border network defense model has failed as a data protection strategy.

Border firewalls are not effective "protection" devices. They are, however, excellent "detection" devices. Why? Firewalls always have to let some data pass through them, and wireless networks negate the effectiveness of a "border" firewall by pushing the network border out to the endpoint. Whitelisting outbound traffic is a challenge because most sites are now hosted by companies like Akamai, which host thousands of sites. However, firewalls log packet traffic, and this information is valuable in network forensics.

Continuous Monitoring (CM) is an effective strategy to detect and interrupt data exfiltration. Seth Misenar and Eric Conrad [1] list 4 points that show why CM is a better strategy for detecting, preventing and/or interrupting data exfiltration:

1. Highly portable devices don’t benefit from the traditional border network defense model.
2. Client-side exploitation significantly decreases the effectiveness of traditional network defense architectures.
3. Lateral movement inside your network after a compromise increases the likelihood of endpoint exploitation.
4. Endpoints must be able to defend themselves and aid in detection.

Monitoring outbound traffic allows a site to use CM techniques to determine if a data breach has happened. Unauthorized data transfers are rarely detected by traditional IDS, IPS or firewall tools because intellectual property isn't just the standard Social Security, credit card, driver's license or bank/debit account numbers. Intellectual property is harder to classify because its "sensitive" data elements are not the traditional items that DLP solutions can find. Netflow monitoring techniques, however, can be used to detect the anomalous traffic patterns an exfiltration creates (a minimal sketch follows).
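Here's a minimal sketch of that netflow-based anomaly idea. The data shape is a hypothetical stand-in for what a collector like nfdump or SiLK would export, and real deployments need smarter baselines than a simple mean and standard deviation.

```python
# Sketch: flag internal hosts whose outbound byte volume spikes far above
# their own history.
from statistics import mean, stdev

def flag_anomalies(daily_bytes_by_host, sigmas=3.0):
    """daily_bytes_by_host maps internal IP -> list of daily outbound byte totals."""
    for host, history in daily_bytes_by_host.items():
        if len(history) < 8:
            continue                          # not enough history to baseline
        baseline, today = history[:-1], history[-1]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and today > mu + sigmas * sd:
            print(f"{host}: {today} bytes out vs baseline {mu:.0f} +/- {sd:.0f}")

# A host that normally pushes ~2 MB/day suddenly sends 480 MB:
flag_anomalies({"10.0.5.7": [2_000_000, 2_400_000, 1_900_000, 2_200_000,
                             2_100_000, 2_300_000, 2_000_000, 480_000_000]})
```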

2. Hacker Attack Strategy

When hackers attack a site, they have 3 primary goals:
  • Compromise the endpoint and search for data that can be stolen.
  • Maintain control of the endpoint so it can be used to attack internal and external systems.
  • Be able to destroy the system to eliminate evidence of a compromise if discovered.
Hackers have adapted to inbound blocks by tricking internal users into initiating an outbound connection to the malware site. For example, the infostealer class of malware searches the target system for sensitive data such as SSN, CCN, bank or debit account information, builds a list of files containing that data, and then phones "home" to let the hacker know it has data ready for exfiltration.
A compromised machine has to communicate back to the hacker when an attack is successful. If defenders interrupt the established communications/control channel, data exfiltration is prevented or interrupted. This also prevents the hackers from issuing a "self destruct" command to cover their tracks.

3. Continuous Monitoring Defense

Prevention eventually fails but detection and containment are forever. CM assumes the attackers are already inside your network and provides the data needed to find them. The defenders' best chance of containing an attack lies in interrupting hacker goal #2. Here's how CM can help determine if a breach of personally identifiable information (PII) has occurred.

1. The general security strategy should be "protect (encrypt) sensitive data regardless of location." Protecting devices is obviously important; however, if the sensitive data itself is protected, then the probability of a data breach is reduced.

2. Monitoring outbound traffic can detect anomalous outbound transmissions. If a system is compromised, we ask whether there was any sensitive data on the device.
a. No. Use logs (syslog, eventlog, netflow, sensor, firewall, IDS, DLP) to isolate the compromised host and determine whether any external communication happened. Reinstall/reimage the compromised host. Go to step 1.
b. Yes. Run PII search tools like IdentityFinder or Find_SSN (a minimal sketch of this kind of scan follows) to find out how many records were potentially exposed. If the data files were encrypted, the chances of a data breach are minimal; go to step 2a. If PII was in the clear, determine how many unique records were in the file. Go to step 3.
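For illustration, here's a naive sketch of what a PII search tool like Find_SSN does. A real scanner validates number ranges and parses binary file formats; this regex-only version is my own simplification and will generate false positives.

```python
# Sketch: walk a filesystem and count candidate SSN patterns in text files.
import re
from pathlib import Path

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            hits = SSN.findall(path.read_text(errors="ignore"))
        except OSError:
            continue                 # unreadable file; skip it
        if hits:
            print(f"{path}: {len(set(hits))} unique candidate SSNs")

scan("/home/compromised-user")       # hypothetical path on the compromised host
```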

3. Determine if sensitive data file(s) were exfiltrated from the net. Use network forensics to determine:
a) when the earliest communication between the attacker and the compromised endpoint occurred. This defines the window of exposure.
b) whether other internal hosts were accessed from the compromised host. This defines the extent of the attack.
c) the probability that a sensitive data breach occurred, by examining netflow data to and from the compromised host.

Historical network data is used to answer the above questions. That data comes from various sensors, each fulfilling a role in CM. The biggest advantage defenders have is the ability to monitor their own network traffic: a system whose logs have been wiped can still be investigated by examining network traffic (see the sketch below).
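Here's a minimal sketch of using stored flow records to answer questions 3a and 3b above. The (src, dst, first_seen) record format and the sample addresses are hypothetical stand-ins for your collector's actual schema.

```python
# Sketch: find the earliest attacker contact (window of exposure) and which
# internal hosts the victim touched afterward (extent of the attack).
from datetime import datetime

def window_and_spread(flows, victim: str, attacker: str):
    """Return earliest victim<->attacker contact and internal hosts the victim touched."""
    earliest = None
    touched = set()
    for src, dst, first_seen in flows:
        if {src, dst} == {victim, attacker}:
            if earliest is None or first_seen < earliest:
                earliest = first_seen
        elif src == victim and dst.startswith("10."):   # crude "internal" test
            touched.add(dst)
    return earliest, touched

flows = [
    ("203.0.113.9", "10.0.5.7", datetime(2017, 5, 28, 3, 12)),
    ("10.0.5.7", "10.0.9.20", datetime(2017, 5, 29, 1, 4)),
]
print(window_and_spread(flows, victim="10.0.5.7", attacker="203.0.113.9"))
```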

4. A Continuous Monitoring Example

How do we detect a suspicious exfiltration? First, you have to establish a traffic baseline to see what "normal" traffic looks like. Baselining answers the question "where do my organization's packets go?" For example, Figure 1 shows the countries that send and receive packets from a network in a month. The blue bar shows packets that enter the network from a country and the red bar shows packets that leave the network for that country. Once you profile the inbound/outbound traffic, you can do a detailed analysis of it.
Packet traffic within the United States is shown at the bottom of the figure. A possible explanation is that the majority of this traffic goes to external search engines. For example, a search engine query for "Randy Marchany" sends a relatively short packet stream to a search engine, while the results of the search are usually much larger than the original query. Obviously, not all traffic is web based, but having this data allows you to do a detailed analysis of your network traffic (a sketch of the per-country tally follows Figure 1).



Figure 1. Inbound/outbound network traffic by country
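Here's a minimal sketch of how a tally like Figure 1 could be computed from flow records. The country_of() lookup is a hypothetical GeoIP function (the geoip2 package is one real option), and the record format is an assumption.

```python
# Sketch: total bytes in and out per country, the raw data behind Figure 1.
from collections import defaultdict

def by_country(flows, country_of):
    totals = defaultdict(lambda: [0, 0])     # country -> [bytes_in, bytes_out]
    for src, dst, nbytes, direction in flows:
        if direction == "in":
            totals[country_of(src)][0] += nbytes
        else:
            totals[country_of(dst)][1] += nbytes
    return dict(totals)

# Example with a stub lookup in place of a real GeoIP database:
stub = {"203.0.113.9": "CN", "198.51.100.4": "GB"}.get
print(by_country([("203.0.113.9", "10.0.5.7", 4_200, "in"),
                  ("10.0.5.7", "198.51.100.4", 9_800_000, "out")],
                 lambda ip: stub(ip, "US")))
```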

Figure 2 shows a different pattern: a large volume of data leaving the network for China, Great Britain and Brazil. This pattern doesn't confirm an exfiltration is happening, but it certainly gives us reason to investigate the traffic further. In this case, the analysis confirmed an exfiltration was in progress, and the incident response team was able to take steps to contain and interrupt the data transfer.



Figure 2. Inbound/outbound traffic by country with anomaly

5. Summary

The traditional network border defense strategy has failed to prevent data breaches. It's time to change our defensive posture from inbound-centric to outbound-centric. Continuous Monitoring allows us to determine whether a data exfiltration has happened, and CM plus network forensics can be the difference between a small internal breach and a major disaster.

Some good reference books on this topic are "Extrusion Detection: Security Monitoring for Internal Intrusions" by Richard Bejtlich, "Network Forensics" by Sherri Davidoff and Jonathan Ham, and "Applied Network Security Monitoring" by Chris Sanders and Jason Smith.