For many years, encryption algorithms were quite simple. The offsets used by Mary
Queen of Scots relied on the slowness of human decipherers, who were often as
much psychologists as mathematicians. If every letter in the encrypted message was,
say, five letters up the alphabet from the original (wrapping around again at Z), then
decoding one word was enough to break the whole text. A bit more complicated
would be incrementing the offset by a fixed amount, which would take a little more
doing on the part of the decipherer but would still yield to trial and error.
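As a concrete illustration, the fixed-offset scheme could be sketched in Python along the
following lines; the five-letter shift and the sample message are purely illustrative:

    # Minimal sketch of the fixed-offset ("shift") scheme described above:
    # every letter moves a constant number of places up the alphabet,
    # wrapping around again at Z.

    def shift_encrypt(clear_text: str, offset: int = 5) -> str:
        result = []
        for ch in clear_text.upper():
            if ch.isalpha():
                # Wrap around at Z by working modulo 26.
                result.append(chr((ord(ch) - ord("A") + offset) % 26 + ord("A")))
            else:
                result.append(ch)
        return "".join(result)

    def shift_decrypt(cipher_text: str, offset: int = 5) -> str:
        # Decryption is just the same shift run backward.
        return shift_encrypt(cipher_text, -offset)

    print(shift_encrypt("ATTACK AT DAWN"))   # FYYFHP FY IFBS
    print(shift_decrypt("FYYFHP FY IFBS"))   # ATTACK AT DAWN

Once a decipherer guesses one word, the single offset is exposed and the whole message falls.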
These types of techniques were supplanted by the use of "key texts," a method a notch
further up the complexity chain. The offsets to the clear text were determined by the
value of the letters of another text, which could be any written document agreed upon
by both sender and receiver. The document was usually a book, and the key could start
anywhere in the text (say, on the 23rd letter of page 23). This method worked pretty
well, except when the encrypted message was intercepted along with one or the other
of the concealing parties, at which point the shared secret could be "coaxed" out of the
unfortunate soul. These algorithms all depended on the absence of computing power,
which in today’s world can perform, in a relatively short period, "brute force" trial-and-
error sequences that a human could never hope to produce in a lifetime.
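A rough Python sketch of the key-text idea might look like the following; the key phrase
and starting position here are arbitrary stand-ins for the book and page the two parties
would actually agree on:

    # Sketch of the "key text" method: the offset for each message letter
    # comes from the corresponding letter of a text both parties hold,
    # starting at an agreed position.

    def keytext_encrypt(clear_text: str, key_text: str, start: int = 0) -> str:
        key = [c for c in key_text.upper() if c.isalpha()][start:]
        out, k = [], 0
        for ch in clear_text.upper():
            if ch.isalpha():
                offset = ord(key[k]) - ord("A")   # the key letter sets the shift
                out.append(chr((ord(ch) - ord("A") + offset) % 26 + ord("A")))
                k += 1
            else:
                out.append(ch)
        return "".join(out)

    # Both sides must hold the same "book" and starting point -- the shared secret.
    print(keytext_encrypt("MEET AT NOON", "It was the best of times", start=3))
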
FROM DES TO AES
One of the first big improvements in security came in the early 1970s, when IBM scientists
developed the Data Encryption Standard (DES). DES starts with something like
offsets but uses complex permutations. The algorithm itself is in the public domain,
but, without the key, the result of any particular instance of usage is nearly opaque.
Essentially, the clear text is broken into groups or blocks of 64 bits and then
transformed using an algorithm dependent on both the message bits and a key that is
effectively 56 bits long (8 of its 64 bits being reserved for parity checks). This stuff is pretty thorough. As
a rule of thumb, changing one bit of input in the clear text changes the values of half
the output bits in the encoded text. To break this code without the key, a decipherer
has to try 2^56, or 72,057,594,037,927,936, combinations (72 quadrillion, for those
intimidated by the sight of large numerals), and because of the dynamism of the DES
algorithm, it is extremely difficult to reduce the size of the search space (search-space
reduction being one of the more important techniques at the disposal of decipherers)
other than by luck. Until the mid-1990s, only the National Security Agency had the
computational power to crack a 56-bit DES-encoded message with brute force.
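The block size, the key size, and the "half the output bits change" rule of thumb can be
seen directly in a short Python sketch; this one assumes the third-party pycryptodome
package for the DES primitive itself, and the key shown is illustrative, not a real secret:

    # Illustration of the DES properties described above, using the
    # third-party pycryptodome package (pip install pycryptodome).
    # The key is 8 bytes (64 bits), of which 56 bits are effective.

    from Crypto.Cipher import DES

    key = b"8bytekey"                      # illustrative key, not a real secret
    block = b"64bitblk"                    # one 64-bit (8-byte) block of clear text

    c1 = DES.new(key, DES.MODE_ECB).encrypt(block)

    # Flip a single bit of the input block and encrypt again.
    flipped = bytes([block[0] ^ 0x01]) + block[1:]
    c2 = DES.new(key, DES.MODE_ECB).encrypt(flipped)

    # Count how many of the 64 output bits changed -- typically around half.
    changed = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    print(f"{changed} of 64 ciphertext bits changed")

    # The brute-force search space without the key:
    print(f"2**56 = {2**56:,} keys")       # 72,057,594,037,927,936
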
In the mid-1990s, commercially available computing reached a level of performance
sufficient to break DES in a matter of hours, and privacy seekers started using Triple
DES, which essentially runs clear text through the DES washing machine three times,
using a different key on each pass. Triple DES was considered quite secure, requiring
a code breaker to cover a search space of 2^112 combinations. The only reason the
search space is not 2^168 is that by that time complex cryptanalytic techniques had
been discovered that reduced the maximum search space. Nonetheless, Triple DES
represented a reprieve for the existing standard. It would still take all the computers
on the Internet more time to crack than the earth is likely to last, not to mention the
human race or something as geologically transient as electricity.
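The three-pass construction can be sketched with the same pycryptodome DES primitive;
the standard arrangement is encrypt-decrypt-encrypt ("EDE") with three different keys,
and the keys below are again purely illustrative:

    # Sketch of the Triple DES idea: the same DES primitive applied three
    # times with different keys, in the standard encrypt-decrypt-encrypt order.

    from Crypto.Cipher import DES

    k1, k2, k3 = b"firstkey", b"secondk2", b"thirdke3"   # three distinct 8-byte keys
    block = b"64bitblk"                                   # one 8-byte block

    def des(key):
        return DES.new(key, DES.MODE_ECB)

    # Pass 1: encrypt with k1; pass 2: decrypt with k2; pass 3: encrypt with k3.
    cipher_text = des(k3).encrypt(des(k2).decrypt(des(k1).encrypt(block)))

    # Decryption runs the three passes backward.
    clear_text = des(k1).decrypt(des(k2).encrypt(des(k3).decrypt(cipher_text)))
    assert clear_text == block

    # Meet-in-the-middle attacks cut the effective search space to about 2**112.
    print(f"2**112 = {2**112:,} combinations")
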
However, even Triple DES had a couple of major weaknesses. It was a symmetric
key encryption method, an Achilles’ heel that in some ways makes it no stronger than
the old key-text method. The algorithm is called symmetric because the math to
encrypt a message is simply run backward to decrypt it. This scheme requires both
the sender and recipient to have the same key. Both parties have to share a secret,
and they must be able to exchange that secret secretly. And so the possibility exists
that clever Internet sniffers or bad men with pointy sticks can extract the secret at
either end of the transmission or even in the middle and pop open the message. After
all, the key is just a series of numbers, albeit long ones. In addition, because of the