We’ve been hearing a lot of discussion about encryption
these days. The Federal government has proposed installing “backdoors” in
encryption algorithms so that law enforcement and security agencies can
monitor communication between entities who pose a threat to “our”
security. We’ll say more about this
later, but we want to emphasize that this is an age-old argument.
Clay Bennett won a Pulitzer Prize in 2002 for an editorial
cartoon that expertly captures the security vs. privacy issue. Imagine a house,
two people inside it, and a wooden fence around the house. The house has a label
that says “PRIVACY”. Workmen are removing planks from the house and using them
to build the fence, which has a label that says “SECURITY”. Security vs. privacy is like a see-saw: the
more security you want, the less privacy you have. It is not a “vice versa”
situation, though. More privacy does not necessarily mean less security. Security
advocates usually say, “If you’re not doing anything wrong, then you shouldn’t
be worried.” There are lots of flaws
in this argument. The most common one is: who defines
“wrong”? Does wrong mean “illegal”, for example, or does it include dissent? A common
definition of privacy is the “right to be left alone”.
Encryption provides a way to hide something you send or
store from unauthorized entities. It can be as basic as speaking a foreign
language to someone or as sophisticated as methods based on advanced mathematics. For
example, the Navajo code talkers used their language as an “encryption” method,
communicating without the enemy being able to determine what was being said.
As with any process, it can be used for good or evil. You “break” this encryption technique by finding
someone fluent in the language being used.
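The idea can be sketched as a toy substitution cipher (illustrative only, not a real encryption scheme): the “key” is a fixed letter mapping, and anyone who knows the mapping, i.e., is “fluent” in the language, can read the message.

```python
# Toy substitution (Caesar) cipher -- illustrative only, NOT secure encryption.
# The "key" is the letter mapping; knowing it ("fluency") breaks the scheme.
import string

SHIFTED = string.ascii_lowercase[3:] + string.ascii_lowercase[:3]
ENCRYPT_MAP = str.maketrans(string.ascii_lowercase, SHIFTED)
DECRYPT_MAP = str.maketrans(SHIFTED, string.ascii_lowercase)

def encrypt(msg: str) -> str:
    return msg.translate(ENCRYPT_MAP)

def decrypt(ct: str) -> str:
    return ct.translate(DECRYPT_MAP)

ct = encrypt("attack at dawn")
print(ct)            # -> "dwwdfn dw gdzq"
print(decrypt(ct))   # -> "attack at dawn"
```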
In the 1990s, the Federal government proposed a method (the
Clipper chip) allowing law enforcement and security agencies to decrypt encrypted
information. The resulting uproar was instrumental in shooting this proposal
down, but it showed how little people understood about how encryption works. The
Clipper chip was a “backdoor” way to decrypt a file or transmission. Suppose
you put your tax papers in a vault to protect them from unauthorized access. You
use a lock and key to gain access. A backdoor would be something like a master
key for that lock that allows it to be unlocked. Common sense tells us that a) the
master key needs to be guarded at all times, b) the person who holds the master
key must not be evil, and c) the person who has the regular key should know a master key
exists.
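The vault-and-master-key analogy translates to key escrow. A minimal sketch, using a toy XOR “lock” (not real cryptography) and hypothetical names: the data key is wrapped once for the owner and once under an escrowed master key, and whoever holds the master key, legitimately or not, recovers the same data key.

```python
# Minimal key-escrow sketch -- toy XOR "locks", illustrative only.
# The data key is wrapped twice: once for the owner, once under a
# master (escrow) key. Either key recovers the data key.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

owner_key  = secrets.token_bytes(16)   # the owner's "regular key"
master_key = secrets.token_bytes(16)   # the escrowed "master key" (backdoor)
data_key   = secrets.token_bytes(16)   # actually protects the vault contents

wrapped_for_owner  = xor(data_key, owner_key)
wrapped_for_escrow = xor(data_key, master_key)

# Either key unwraps the same data key:
assert xor(wrapped_for_owner, owner_key) == data_key
assert xor(wrapped_for_escrow, master_key) == data_key
# Note: a stolen *copy* of master_key works just as well -- digital
# keys can be duplicated without the owner ever knowing.
```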
So what’s the problem? Well, in the digital world, copies
can be made without the owner’s knowledge. Any good hacker would try to get
that “master” key and use it. It’s folly to assume a digital “master
key/backdoor” would never be compromised. The 2011 RSA hack and the 2013 Bit9
(later Carbon Black) attack are examples of hackers going after “master” keys with
success. While the whole purpose of encryption is to protect data at rest and
in transit, there are still ways to get the data in its original form. Consider the following:
A -> K -> M1 -> C1 -> EC ----------> DC -> C2 -> M1 -> file/display -> B
Person A uses a keyboard K to create a message M1, stores it
on computer C1, and encrypts it using tool EC. The encrypted message arrives at
the target machine and is decrypted by tool DC running on device C2; the data M1 is either stored in a file or
shown on the display to person B. The
message is encrypted only from EC on C1 to DC on C2. Attack points where the data could be copied
are at K, C1, C2, M1, and the file/display. Note these attack points do NOT need
your encryption key. Why? The data is in the clear when it’s entered at K and when it is
stored in file M1. If you write a program to grab the data at these points,
you get the data in the clear.
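The pipeline above can be sketched as follows (hypothetical function names, with a toy XOR stream standing in for EC/DC): a “tap” at the keyboard K copies M1 in the clear, so the attacker never needs the encryption key.

```python
# Sketch of the A -> K -> ... -> B pipeline (illustrative names only).
# A tap at keyboard K sees the message before EC runs -- no key needed.
captured = []

def keyboard(text: str) -> str:       # K: attack point before encryption
    captured.append(text)             # keystroke logger copies cleartext
    return text

def ec(m: str, key: int) -> str:      # EC: toy XOR "encryption" (not real crypto)
    return "".join(chr(ord(c) ^ key) for c in m)

def dc(c: str, key: int) -> str:      # DC: matching decryption (XOR is symmetric)
    return ec(c, key)

m1 = keyboard("meet at noon")         # data is in the clear at K
ciphertext = ec(m1, key=42)           # protected only from EC to DC
assert dc(ciphertext, key=42) == m1   # B reads the message normally
assert captured == ["meet at noon"]   # attacker got it without the key
```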
This is nothing new. The first public reporting of this
technique came in 1998, when the FBI used a keystroke recorder against a
mafia don’s computer. The recorder allowed them to collect information used to
prosecute him: it copied the data as it was entered, before it was encrypted. The 2001 Magic
Lantern tool and the 2009 CIPAV (Computer and Internet Protocol Address
Verifier) were law enforcement tools developed to get data before it was
encrypted.
These were very effective techniques, and they did not require a
“backdoor” to an encryption algorithm.
So, let’s go back to the privacy part of this essay. Those
who advocate the “security” argument maintain there is a need to
determine whether criminal activity is being planned. If criminals use encryption to hide
their intent, then the government needs to be able to decrypt those messages. But there are ways to get data before it is
encrypted, so why the need for a backdoor? We need to remember that a message or
data in a file starts as cleartext, and data-capture techniques have been around
for the past 20 years. Since they don’t require a backdoor to the encryption
algorithm, one could assume the real target is privacy. Why? Introducing backdoors into any
encryption algorithm destroys the algorithm as an encryption tool: the
backdoor(s) will eventually become publicly known, and the protection the encryption provided ceases to
exist.
What’s truly ironic about this contention is that
individuals are freely giving up lots of personal information to commercial
companies.
This is a reprint of an article originally posted on https://encryption-and-data-loss-protection-solutions.enterprisesecuritymag.com/cxoinsight/encryption-security-and-privacy-oh-my-nid-1455-cid-5.html