Sunday, April 3, 2022

Motorcycle Riding and Being A CISO

I was checking out some YouTube videos and ran across one with Michael Jordan, Charles Barkley and Oprah Winfrey. Toward the end of the video, Michael talks about being a defensive driver if you ride a motorcycle. "You have to be really focused and see the traffic ahead," says Jordan. He then takes a dig at Charles. Check out https://www.youtube.com/watch?v=t_Q1k2r2yao.

 

I've ridden bicycles almost all of my life and motorcycles for the last third of it. When I'm on a bike (either type), I'm looking ahead to see what traffic patterns are forming and trying to anticipate how I can maneuver through them safely and efficiently. My nephew and I used to play a game when he was younger. We'd be in the mall, and the challenge was to walk through a crowd from point A to point B without missing a step or stopping because someone stepped in front of you. You had to watch the traffic flow and make your best guess on where and when an opening would occur.

 

This is one of the things a CISO or security architect should practice. You want to look at threat intel, network traffic or attack patterns and chart a course of action based on your past knowledge as well as your ability to guess what will happen next. Sure, sometimes you guess wrong but you use that knowledge to improve your prediction capability. Sound like machine learning? Probably.

 

Next time you ride a bicycle or motorcycle, see if you ride defensively by looking ahead and anticipating the next action that can happen. Take that skill and apply it to designing your security architecture.

Sunday, November 28, 2021

Is Protecting Admin Privs on Endpoints Still Relevant?

 

The post-pandemic WFH (Work From Home) model should force us to reevaluate the effectiveness of our security architectures. The most common reason users want administrative privileges on a device is that local IT support can't install needed software when the business requires it. I ask my SANS students how long it takes to install a software package for a business unit. The answers range from 1-2 weeks to 6 or more months because of a software review process.

Admin privileges on endpoints

I want to emphasize that I'm NOT talking about administrative privileges on Active Directory domain accounts or on central management platforms (Kaseya, SolarWinds, etc.). I'm talking about local accounts and accounts on standalone computers.

Is the "User having (local/standalone) admin privileges on a computer" as bad a security risk as people say it is? I emphasize the term "local/standalone" admin accounts. I think it is not.  Why? 

  

1) In the old days, having admin privileges on a multi-user system was a big deal. If you were in administrator/root mode and your account got owned, the consequences of that breach would impact ALL of the users on that system. For large multi-user systems, that could be hundreds to thousands of users. I understand why there was concern about keeping administrative/root accounts secure. For servers that provide a service to multiple remote (to the server) users, it makes sense to restrict admin privileges on the server(s).

  

2) In today's BYOD world, users are admin/root and general users simultaneously. There is usually only one user per device, so the impact of an admin/root failure is limited to that individual. Phishing and ransomware attacks are just as damaging whether the account that triggered them was a general user or an admin-level account. Smartphones, tablets, etc. have merged the admin and general privilege levels into a single account, so it makes no sense to "restrict" admin privileges on those devices today. You can't enforce that.

  

What about high-risk data exposure? Such exposures can happen in either admin/root or general user mode. In most of the incidents I've seen over the years, the damage was done regardless of the privilege level of the account involved.

  

It comes down to training. I've said in a previous blog entry that a poorly trained sysadmin is one of the greatest threats to an organization's data and infrastructure. Organizations should require a minimum amount of training for employees who want administrative privileges on a device. 


Thursday, April 15, 2021

Time to Train -

 "Excuse me, sir. How do I get to Carnegie Hall?"

"Practice, Practice, Practice."


I've always said that a poorly trained sysadmin is one of the greatest threats to any organization's infrastructure. The military training model may seem archaic and cumbersome, but it is effective. Creating an effective training program requires a significant investment. I believe the correct technical description is "it ain't cheap". Organizations that fail to train their technical and general user staff in basic or advanced IT security practices are doomed to suffer multiple failures.

I'm not going to dive into pedagogy (can't help but giggle every time I hear that word) or the merits of a good training program. Too much has been said on those topics. Instead, I'm going to present my idea of a training roadmap here:




[Training roadmap diagram]

Here we have three main training tracks:

  • Technical track - the target audiences are system administrators, developers, and IT Security analysts/architects. These training programs are designed to enhance your staff's technical knowledge.
  • Awareness track - the target audiences are your general staff and management. These training programs are designed to make your workforce aware of the laws, regulations, and best practices for handling your organization's sensitive data. In addition, these programs show your staff the different types of physical and cyber attacks they may see and how to respond to those threats.
  • User (How-to) track - this training program teaches your staff how to use the day-to-day tools of your business. It covers things like how to:
    • use Microsoft Office, Adobe Acrobat tools
    • use graphical design tools
    • use collaboration tools
    • use in-house tools
    • use external software or hardware products.
There needs to be a blend of externally developed training materials (SANS Secure the Human, Skillsetsonline, LinkedIn Learning, etc.) and "local" training for in-house applications.

Take a look at the roadmap above. I would like to hear your suggestions on how to improve or implement it.


Saturday, January 23, 2021

Resilience Is the Key to a Successful Defensive Strategy


The main mission of any CISO is not to prevent breaches of their infrastructure; rather, it's to safeguard your organization's sensitive data and identity. I've said many times in the past that there are no device breach notification laws, but there are plenty of data breach notification laws. There are many ways to protect data and identity, like encryption, monitoring outbound traffic, increasing user awareness, and multi-factor authentication. These are important things, but they are a means to achieve a goal. Resilience is the key to a sound defensive strategy. Here are some thoughts.

  1. We play defense not offense. 95% of companies hire cybersecurity people to defend their company from cyberattacks. They don't hire them to attack other sites. That's what the remaining 5% do. However, to play good defense, one must know how to play good offense. In other words, a Blue Team should have strong Red Team skills.
  2. One must accept the fact that a breach will happen regardless of whatever controls are in place. The old defensive strategy of building a "wall" to keep the bad guys out has failed. While there are many variants of the now popular Zero Trust Network philosophy, there are 2 key points that must be in place:
    1. The network is hostile.
    2. Data and identity are the new borders.
  3. The key to a successful defensive strategy is resilience, not prevention. A sound resilience strategy is what makes recovery possible.

Resilience 

I could give the Webster's dictionary definition of resilience but let me give you an example.

Ransomware is one of the most destructive attacks to affect a large number of organizations and people recently. It's been around since 1989, but what made it popular was the introduction of cryptocurrency as the payment mechanism. For example, the Virginia State prescription monitoring database was hit with a ransomware attack in 2009 and the attackers demanded a $10M ransom. The state didn't pay and restored from backups. There was a disruption of service and some loss of data, but the service recovered. Collecting $10M in small bills requires a bunch of duffel bags and every LEO on the planet watching those bags to see who collects them.

This incident convinced me that the best defense against ransomware attacks is not "prevention"; rather, it is "recovery". Take the time to carefully align file permissions with the need-to-access requirements of the business. This is a difficult step, but it may limit ransomware damage by limiting the files the malware can access.
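Here's a minimal sketch, in Python, of what that alignment exercise can look like on a UNIX-style file server: walk a share and flag anything writable beyond an approved set of groups. The share path and group names below are hypothetical placeholders, and a real audit would also have to cover ACLs and Windows shares.

import grp
import os
import stat

# Hypothetical policy: only these groups should be able to write to this share.
ALLOWED_WRITE_GROUPS = {"finance", "finance-admins"}
SHARE_ROOT = "/srv/shares/finance"   # placeholder path

def audit_share(root):
    """Flag files whose permissions exceed the need-to-access policy."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            try:
                group = grp.getgrgid(st.st_gid).gr_name
            except KeyError:
                group = str(st.st_gid)   # unknown gid; report it numerically
            world_writable = bool(st.st_mode & stat.S_IWOTH)
            group_writable = bool(st.st_mode & stat.S_IWGRP)
            if world_writable or (group_writable and group not in ALLOWED_WRITE_GROUPS):
                findings.append((path, oct(st.st_mode & 0o777), group))
    return findings

if __name__ == "__main__":
    for path, mode, group in audit_share(SHARE_ROOT):
        print(f"REVIEW: {path} mode={mode} group={group}")

Anything the script flags is a candidate for tightening; every file a user can't write to is a file that user's ransomware infection can't encrypt.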
 
A good backup strategy is the best defense in this case. A system gets hit with ransomware; you wipe it, patch it, restore your data from known-good backups, and then move on with your business. A good resilience strategy should include these steps:

  • Find your sensitive data. Consolidate it into something like a data lake.
  • Map where your sensitive data goes within your network borders as well as outside of them.
  • Back up this data lake by taking snapshots or doing old-school incremental backups, and store the backups offline in read-only mode. For example, NetApp devices allow the creation of a read-only snapshot.
  • Test your recovery processes frequently; a minimal restore-check sketch follows this list.
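As a sketch of that last step, assuming a mounted backup volume and a handful of sample files (the paths below are hypothetical stand-ins), a small script can hash a sample of backed-up files against the live copies and complain loudly when something doesn't line up:

import hashlib
import pathlib

# Hypothetical locations; substitute your own backup mount and sample set.
BACKUP_ROOT = pathlib.Path("/mnt/backup/latest")
LIVE_ROOT = pathlib.Path("/srv/datalake")
SAMPLE_FILES = ["hr/payroll.db", "research/projects.csv"]

def sha256(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_samples():
    """Compare a sample of backed-up files against the live copies."""
    for rel in SAMPLE_FILES:
        live, backup = LIVE_ROOT / rel, BACKUP_ROOT / rel
        if not backup.exists():
            print(f"FAIL: {rel} missing from backup")
            continue
        status = "OK" if sha256(live) == sha256(backup) else "MISMATCH (investigate)"
        print(f"{status}: {rel}")

if __name__ == "__main__":
    verify_samples()

A mismatch may simply mean the live file changed after the snapshot was taken, so treat it as a prompt to investigate rather than proof of corruption. The point is that the backups get exercised on a schedule, not just admired.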

The old RFC 1244 "Site Security Handbook" describes two defensive strategies: "Protect and Proceed" and "Pursue and Prosecute".  It set the following conditions for each of these approaches:
 
Protect and Proceed
      1. If assets are not well protected.
      2. If continued penetration could result in great
         financial risk.
      3. If the possibility or willingness to prosecute
         is not present.
      4. If user base is unknown.
      5. If users are unsophisticated and their work is
         vulnerable.
      6. If the site is vulnerable to lawsuits from users, e.g.,
         if their resources are undermined.
   Pursue and Prosecute
      1. If assets and systems are well protected.
      2. If good backups are available.
      3. If the risk to the assets is outweighed by the
         disruption caused by the present and possibly future
         penetrations.
      4. If this is a concentrated attack occurring with great
         frequency and intensity.
      5. If the site has a natural attraction to intruders, and
         consequently regularly attracts intruders.
      6. If the site is willing to incur the financial (or other)
         risk to assets by allowing the penetrator continue.
      7. If intruder access can be controlled.
      8. If the monitoring tools are sufficiently well-developed
         to make the pursuit worthwhile.
      9. If the support staff is sufficiently clever and knowledgable
         about the operating system, related utilities, and systems
         to make the pursuit worthwhile.
      10. If there is willingness on the part of management to
          prosecute.

                                             Figure 1. Protect and Proceed vs Pursue and Prosecute

My ransomware scenario's recovery process is an implementation of the requirement listed in RFC 1244: "Attempts will be made to actively interfere with the intruder's processes, prevent further access and begin immediate damage assessment and recovery."

This is an example of resilience. Andy Greenberg's book "Sandworm" has a chapter dedicated to resilience. Dan Geer's essay "A Rubicon" is another example of the importance of resilience. Creating a "parallel" network universe addresses interdependency issues and allows for a quick recovery.

We should certainly spend funds on detection tools, but the bulk of present-day defenses should be focused on how we recover from an attack. Processes such as backups, monitoring, and disrupting outbound traffic to questionable sites are examples of a good resilience strategy.
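As a back-of-the-envelope illustration of the monitoring piece, here's a sketch that matches an export of outbound connections against a blocklist. The file names and CSV columns (timestamp, src, dst) are hypothetical stand-ins for whatever your firewall or NetFlow tooling actually produces; the real work is in curating the destination list and acting on the hits.

import csv
import pathlib

# Hypothetical inputs: a CSV export of outbound connections (timestamp,src,dst)
# and a plain-text list of known-bad destinations from your threat intel feed.
CONNLOG = pathlib.Path("outbound_connections.csv")
BLOCKLIST = pathlib.Path("known_bad_destinations.txt")

def load_blocklist(path):
    """One destination (domain or IP) per line; '#' starts a comment."""
    entries = set()
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            entries.add(line.lower())
    return entries

def flag_outbound(connlog, bad):
    """Yield connections whose destination appears on the blocklist."""
    with open(connlog, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst"].strip().lower() in bad:
                yield row

if __name__ == "__main__":
    bad = load_blocklist(BLOCKLIST)
    for row in flag_outbound(CONNLOG, bad):
        print(f"{row['timestamp']} {row['src']} -> {row['dst']}  (on blocklist)")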

You're going to get breached at some point in time. How fast you recover can limit the damage done to your business processes. 

References


https://assets.documentcloud.org/documents/4366740/Geer-Webready-Updated.pdf
https://tools.ietf.org/html/rfc1244
https://mysupport.netapp.com/NOW/public/eseries/sam_archive1150/index.html#page/GUID-8538272A-B802-49D9-9EA2-96C82DAD26A2/GUID-F6C0C512-F196-4008-97AE-EA06EE4D32F6.html

Monday, August 31, 2020

RDP Security Tip and other Infographics

Thanks to Thomas Roccia for this great resource he created. It's at https://medium.com/@tom_rock/security-infographics-9c4d3bd891ef. I think you'll find these infographics to be particularly useful in any presentation you do.

We've been asked a lot about Remote Desktop security given the Work From Home (WFH) situation we're in during the pandemic. It is a serious problem, and here's a great infographic from Thomas' site.



Saturday, August 8, 2020

Academic Freedom and IT Security - They Do Work Well Together

I was a member of a panel on Cyber Hygiene sponsored by the SANS Institute today. My good buddies Tony Sager and Russell Eubanks were also on the panel.

An attendee asked me about the challenge of balancing IT Security practices against the cherished Academic Freedom (AF) issue. I responded that IT has to stop being the Department of NO and go out and listen and learn how researchers do their thing. Only then should they decide on a path that supports rather than hinders that research. It's hard to take the time to meet and learn how end users actually do things, given the multitude of tasks most IT people need to perform in their normal course of duties. But understanding how and why your end users do things allows you to design and build a more efficient IT Security program and architecture. Short-term pain leads to long-term gain: taking the time to understand how your end users actually use your IT services will lessen the amount of time you have to spend outside of your normal duties down the road.

It was a great question and it got me thinking about the issue a little more and hence, this blog entry. I've been working in EDU IT for 45 years now and here are some musings on this balancing challenge.

I went on a motorcycle ride and got to thinking more about the question while I was riding through the mountains. It occurred to me that there should be no conflict between IT security and AF principles.  IT Security practices should enhance and protect AF. One complements the other. 

First, let's try to define "academic freedom" for the purpose of this blog. Here are some definitions that I'll use as my foundation. Academic Freedom is defined as:

1. a scholar's freedom to express ideas without risk of official interference or professional disadvantage. "we cannot protect academic freedom by denying others the right to an opposing view" (Oxford Dictionary)

2. Academic freedom means that both faculty members and students can engage in intellectual debate without fear of censorship or retaliation. (https://www.insidehighered.com/views/2010/12/21/defining-academic-freedom)

3. Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties. Teachers are entitled to freedom in the classroom in discussing their subject, but they should be careful not to introduce into their teaching controversial matter that has no relation to their subject. (https://www.aaup.org/issues/academic-freedom/professors-and-institutions)

After reading these definitions, I tried to see where the conflict was between IT practices and Academic Freedom (AF). Frankly, I saw more opportunities for IT practices to support, secure and protect AF. All three of the above definitions emphasize the right of the academic community to discuss any topic freely without fear of censorship or retaliation. Looking at this from the IT Security point of view, the threat scenarios to AF in the online world map neatly onto attacks against the Confidentiality, Integrity and Availability (CIA) aspects of AF.

For example, let's look at censorship. DOS/DDOS attacks,  domain blocking, confiscation of servers or endpoints are examples of availability attacks. Unauthorized modification of topics/data is an example of an integrity attack. Doxing is an example of a confidentiality attack. 

There are existing IT Security practices that can mitigate the effects of these classes of attacks. Availability threats such as DOS/DDOS attacks can be deflected, and domain blocking can be addressed. Good file permission strategies, along with good backups and file integrity tools, can mitigate integrity attacks. Hunting down doxxers and online "bullies" can be done using techniques such as OSINT and log analysis to protect individuals from harassment or retaliation.
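On the integrity side, a file integrity check can be as simple as recording a hash baseline for published course or research material and re-checking it on a schedule. Here's a minimal sketch; the content path is a hypothetical placeholder, and production tools (Tripwire, AIDE, etc.) do far more, but the idea fits in a page:

import hashlib
import json
import pathlib

BASELINE = pathlib.Path("baseline.json")          # hashes recorded at publish time
CONTENT_ROOT = pathlib.Path("/var/www/courses")   # hypothetical web root

def hash_tree(root):
    """Return {relative_path: sha256} for every file under root."""
    hashes = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            hashes[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def record_baseline():
    """Capture the known-good state right after publishing."""
    BASELINE.write_text(json.dumps(hash_tree(CONTENT_ROOT), indent=2))

def check_integrity():
    """Report files that changed, appeared, or disappeared since the baseline."""
    baseline = json.loads(BASELINE.read_text())
    current = hash_tree(CONTENT_ROOT)
    for rel in sorted(set(baseline) | set(current)):
        if rel not in current:
            print(f"MISSING:  {rel}")
        elif rel not in baseline:
            print(f"NEW FILE: {rel}")
        elif baseline[rel] != current[rel]:
            print(f"MODIFIED: {rel}")

if __name__ == "__main__":
    if BASELINE.exists():
        check_integrity()
    else:
        record_baseline()

An unexpected "MODIFIED" line on a faculty member's published work is exactly the kind of integrity attack on academic freedom this post is talking about, caught early enough to restore from backup.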

Sound IT Security practices can and should be used to further advance academic freedom. I think the supposed conflict between IT Security and AF is not the big issue everyone outside of the EDU world thinks it is.

To the webinar attendee who asked me about balancing IT Security practices with Academic Freedom, let me say this: IT Security practices should support academic freedom by building procedures that protect one's right to it. They should never interfere with that core business process.

This is my short answer to this question. I'd like to hear your opinions on this matter.

8/8/2020

Friday, August 7, 2020

Encryption, Security and Privacy, Oh My!


We’ve been hearing a lot of discussion about encryption these days. The Federal government proposes installing “backdoors” in encryption algorithms to allow law enforcement and security groups to monitor communication between entities who pose a threat to “our” security. We’ll talk more about this later, but we want to emphasize that this is an age-old argument.
Clay Bennett won a Pulitzer Prize in 2002 for an editorial cartoon that expertly explains the security vs. privacy issue. Imagine a house with two people inside it and a wooden fence around it. The house has a label that says “PRIVACY”. Workmen are removing planks from the house and using them to build the fence, which has a label that says “SECURITY”. Security vs. privacy is like a see-saw: the more security you want, the less privacy you have. It is not a “vice versa” situation; more privacy does not necessarily mean less security. Security advocates usually say, “If you’re not doing anything wrong, then you shouldn’t be worried.” There are lots of flaws with this argument. The most common one is: who defines what “wrong” means? Does wrong mean “illegal”, or does it include dissent, for example? A common definition of privacy is the “right to be left alone”.
Encryption provides a way to hide something you send or store from unauthorized entities. It can be as basic as speaking a foreign language to someone or as involved as using something based on higher-order mathematics. For example, the Navajo code talkers used their language as an “encryption” method of communicating without the enemy being able to determine what was being said. As with any process, it can be used for good or evil. You “break” this encryption technique by finding someone fluent in the language being used.

In the 1990s, the Federal government proposed a method (the Clipper chip) allowing law enforcement and security groups to decrypt encrypted information. The resulting uproar was instrumental in shooting the proposal down, but it showed how little people understood about how encryption works. The Clipper chip was a “backdoor” way to decrypt a file or transmission. Suppose you put your tax papers in a vault to protect them from unauthorized access. You use a lock and key to gain access. A backdoor would be something like a master key for that lock. Common sense tells us that a) the master key needs to be guarded all the time, b) the person who holds the master key must not be evil, and c) the person who has the regular key should know a master key exists.

So what’s the problem? Well, in the digital world, copies can be made without the owner’s knowledge. Any good hacker would try to get that “master” key and use it. It’s folly to assume a digital master key/backdoor would never be compromised. The 2011 RSA hack and the 2013 Bit9 (now Carbon Black) attack are examples of hackers going after “master” keys with success. While the whole purpose of encryption is to protect data at rest and in transit, there are ways to try to get the data in its original form. Consider the following:

A -> K -> M1 -> C1 -> EC ----(encrypted in transit)----> DC -> C2 -> M1 -> file/display -> B

Person A uses a keyboard K to create a message M1, stores it on computer C1 and encrypts it using tool EC. The encrypted message arrives at the target machine, is decrypted by tool DC running on device C2, and the data M1 is either stored in a file or shown on the display to person B. The message is encrypted only from EC on C1 to DC on C2. Attack points where the data could be copied are at K, C1, C2, M1, and the file/display. Note these attack points do NOT need to know your encryption key. Why? The data is in the clear when it’s entered at K and when it’s stored in file M1. If you write a program to grab the data at these points, you get the data in the clear.
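Here’s a toy version of that flow in Python, using the cryptography package’s Fernet recipe (my choice of tool; the diagram doesn’t name one). The point isn’t the algorithm; it’s that the plaintext necessarily exists at both endpoints, where no key is needed to steal it.

# Toy A -> EC -> (network) -> DC -> B flow; requires "pip install cryptography".
from cryptography.fernet import Fernet

key = Fernet.generate_key()               # shared between EC (on C1) and DC (on C2)
ec = dc = Fernet(key)

message_m1 = b"meet at the usual place"   # typed at keyboard K: cleartext
ciphertext = ec.encrypt(message_m1)       # protected only between EC and DC
recovered = dc.decrypt(ciphertext)        # cleartext again on C2

# A keystroke logger at K, or anything reading memory or files on C1 or C2,
# sees message_m1 or recovered directly and never needs the key.
assert recovered == message_m1
print(ciphertext[:16], b"...", recovered)

That is exactly the gap the keystroke-recorder cases below exploited.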

This is nothing new. The first public reporting of this technique was done in 1998 when the FBI used a keystroke recorder against a mafia don’s computer. The recorder allowed them to collect information used to prosecute him. The keystroke recorder copied the data as it was entered before it was encrypted. The 2001 Magic Lantern tool and the 2009 CIPAV (Computer and Internet Protocol Address Verifier) were law enforcement tools developed to get data before it was encrypted.
These were very effective techniques and did not require a “backdoor” to an encryption algorithm.

So, let’s go back to the privacy part of this essay. Those who advocate the “security” argument maintain there’s a need to be able to determine whether criminal activity is being planned. If criminals use encryption to hide their intent, then the government needs to be able to decrypt those messages. But there are ways to get data before it is encrypted, so why the need for a backdoor? We need to remember that a message or data in a file starts as cleartext, and data capture techniques have been around for the past 20 years. Since they don’t require a backdoor to the encryption algorithm, one could assume the real target is privacy. Why? The introduction of backdoors into any encryption algorithm destroys its value as an encryption tool. The backdoor(s) will become publicly known eventually, and the protection encryption provides ceases to exist.

What’s truly ironic about this contention is that individuals are freely giving up lots of personal information to commercial companies.


This is a reprint of an article originally posted at https://encryption-and-data-loss-protection-solutions.enterprisesecuritymag.com/cxoinsight/encryption-security-and-privacy-oh-my-nid-1455-cid-5.html