Topic : How the RSA Cipher Works
Author : Brian Raiter


  31^29 = 31^(2^4) * 31^(2^3) * 31^(2^2) * 31^(2^0)
        = 31^(2 * 2 * 2 * 2) * 31^(2 * 2 * 2) * 31^(2 * 2) * 31
        = (((31^2)^2)^2)^2 * ((31^2)^2)^2 * (31^2)^2 * 31.


This may look like anything but an improvement at first. But on closer examination, you'll see that we actually have many repeated subterms. This simplifies matters, particularly when we take advantage of the fact that we are calculating modulo 35.

We compute the first square in modulus arithmetic:

  31^2 = 961 = 16 (mod 35).
By substituting this value into our equation:

  31^29 = (((31^2)^2)^2)^2 * ((31^2)^2)^2 * (31^2)^2 * 31 (mod 35),
we get:

  31^29 = ((16^2)^2)^2 * (16^2)^2 * 16^2 * 31 (mod 35).
Now by computing that square:

  16^2 = 256 = 11 (mod 35),
we will have:

  31^29 = (11^2)^2 * 11^2 * 11 * 31 (mod 35).
We keep simplifying in the same fashion:

  11^2 = 121 = 16 (mod 35), and
  16^2 = 256 = 11 (mod 35),
and so:

  31^29 = 16^2 * 16 * 11 * 31 (mod 35)
        = 11 * 16 * 11 * 31 (mod 35).


We can continue to take advantage of the modulus when we do the final multiplications:

  31^29 = 11 * 16 * 11 * 31 (mod 35)
        = 11 * 16 * 341 (mod 35)
        = 11 * 16 * 26 (mod 35)
        = 11 * 416 (mod 35)
        = 11 * 31 (mod 35)
        = 341 (mod 35)
        = 26 (mod 35).


Lo and behold: 26, the same result as when we did it the hard way.
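
This result is easy to double-check. In Python, for instance, the built-in pow function accepts a third argument and performs exactly this kind of modular exponentiation, reducing as it goes (a quick check, nothing more):

  # Verify the worked example: 31^29 mod 35 should come out to 26.
  # The three-argument form of pow() reduces by the modulus at each
  # step, so the full value of 31^29 never has to be written out.
  print(pow(31, 29, 35))    # 26

  # The intermediate squares match the ones computed above:
  print(pow(31, 2, 35))     # 16
  print(pow(16, 2, 35))     # 11
  print(pow(11, 2, 35))     # 16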

This binary technique is really no different than how computers normally compute integer powers. However, the fact that we can break the process down to successive multiplications allows us to apply the modulus at every step of the way. This assures us that at no point will our algorithm have to handle a number larger than (R - 1)^2.
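
For concreteness, here is a minimal sketch of this square-and-multiply approach in Python (the function name modpow is mine, chosen just for illustration; Python's built-in three-argument pow already does the same job):

  def modpow(base, exponent, modulus):
      # Square-and-multiply: walk through the exponent's bits from least
      # to most significant, squaring the running base each time and
      # folding it into the result whenever the current bit is 1.
      # Reducing after every multiplication keeps every intermediate
      # value below (modulus - 1)^2.
      result = 1
      base %= modulus
      while exponent > 0:
          if exponent & 1:                      # this bit is set
              result = (result * base) % modulus
          base = (base * base) % modulus        # base^2, base^4, base^8, ...
          exponent >>= 1
      return result

  print(modpow(31, 29, 35))    # 26, as above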


Safety in Numbers
Okay. So we know that the process of encryption and decryption is still practical, even if R is immense. But all of this is moot unless we can still generate the keys. If we want R to be so big that it can't be factored easily, how are we going to find it in the first place?

Well, it turns out that there is an interesting little asymmetry here. There happen to be relatively cheap ways to test a number for primality. One of the most famous is based on what is called Fermat's Little Theorem. Here is the version of this Theorem that we're interested in:


If P is prime, then N^(P - 1) = 1 (mod P) is true for every number N with 0 < N < P.
(If this seems suspiciously reminiscent of Euler's Totient Theorem, it should. Euler was the first person to publish a proof of this Theorem, and his Totient Theorem is a generalization of Fermat's. You can see this for yourself by remembering that phi(P) = P - 1 when P is prime.)

Of course, from a mathematician's point of view, this theorem is mainly useful for proving that a given number is composite. For example, it just so happens that:

  4^14 (mod 15) = 268435456 (mod 15) = 1,
even though 15 is no prime. Nonetheless, it is also true that:

  3^14 (mod 15) = 4782969 (mod 15) = 9, and
  5^14 (mod 15) = 6103515625 (mod 15) = 10.
On the other hand, 17, which is prime, results in 1 every time:

  3^16 (mod 17) = 43046721 (mod 17) = 1,
  4^16 (mod 17) = 4294967296 (mod 17) = 1,
  5^16 (mod 17) = 152587890625 (mod 17) = 1, and so on.
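These results are quick to reproduce with the modular exponentiation technique from the previous section; in Python, for example:

  # None of the huge powers need to be written out in full:
  print(pow(4, 14, 15))    # 1   (15 slips past base 4)
  print(pow(3, 14, 15))    # 9   (but base 3 exposes it)
  print(pow(5, 14, 15))    # 10  (as does base 5)
  print(pow(3, 16, 17))    # 1   (17, being prime, always yields 1)
  print(pow(4, 16, 17))    # 1
  print(pow(5, 16, 17))    # 1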
So, if we want to know if a number is prime, we can run it through this test, using (say) 2 as the base. If anything besides 1 results, we know with certainty that the number is composite. If the answer is 1, however, we try the test again with 3, and then 4, and so on. If we keep getting back 1 as the result, it soon becomes astronomically unlikely that the number is anything but prime. This doesn't constitute proof, mathematically speaking, but it certainly works for our purposes.
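
A rough sketch of such a test in Python might look like the following (the function name and the particular choice of bases are mine, purely for illustration):

  def fermat_test(candidate, bases=(2, 3, 5, 7, 11, 13)):
      # A prime P must satisfy N^(P - 1) = 1 (mod P) for every base N
      # that is not a multiple of P.  Any other result proves the
      # candidate composite; a string of 1s makes primality extremely
      # likely, though, as noted above, it is not a mathematical proof.
      if candidate < 2:
          return False
      for n in bases:
          if n % candidate == 0:
              continue                  # skip bases the candidate divides
          if pow(n, candidate - 1, candidate) != 1:
              return False              # definitely composite
      return True                       # almost certainly prime

  print(fermat_test(15))    # False (base 2 already gives 2^14 = 4 (mod 15))
  print(fermat_test(17))    # True  (every base yields 1)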

There are other tests for primality, some of which are faster. But they all have at least one thing in common with this test: when they reject a number, they tell us only that the number can be factored. They give us no information at all as to what the factors might be. How unfortunate!

Unfortunate for the mathematicians, that is. Very fortunate for us.

The basic truth is that, in order to find the factors of a composite number, we're pretty much stuck with using brute force: Divide the number by all the primes you can get your hands on until one of them goes in evenly. There are ways to improve on this approach (the Number Field Sieve currently being the best), but they are complicated, and all they do is allow you to narrow the scope of the search. They don't reduce the search enough to make this problem tractable in general.
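
To make the contrast concrete, here is what that brute-force search looks like in Python (this sketch divides by 2 and then by every odd number up to the square root, rather than by a list of primes, but the idea is the same):

  def find_factor(n):
      # Trial division: try every candidate divisor up to sqrt(n).
      # For an RSA modulus built from two primes of a hundred or more
      # digits each, this loop would run for far longer than the age
      # of the universe.
      if n % 2 == 0:
          return 2
      candidate = 3
      while candidate * candidate <= n:
          if n % candidate == 0:
              return candidate
          candidate += 2
      return None                       # no divisor found: n is prime (or 1)

  print(find_factor(35))    # 5
  print(find_factor(91))    # 7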

Nor is it likely that new approaches will, either! The real issue is that the cost of encrypting and decrypting grows only gently with the length of R: the number of multiplications required is roughly proportional to the number of digits in R, so lengthening R slows down encrypting, decrypting, and selecting the two primes for a key only a little. But the algorithms for factoring R have a running time that is exponential with respect to the length of R. That is to say, the time roughly doubles with every few additional digits! (Every digit added to R makes it ten times larger, and so swells the pool of potential candidates for its measly two factors.)

So if a new technique is suddenly found that makes it a trillion times faster to factor a number, all we have to do is increase the size of R we use by enough digits, and the situation will be right back where it started -- and all it means to us is that it takes a little bit longer to send and receive our messages. Already some people are using keys so large that factoring them with the Number Field Sieve would require more energy than exists in the known universe.

An illustration: To the best of my knowledge, the number used as the modulus for the RSA-140 challenge is the largest general number that has been independently factored. (By general, I'm excluding creatures like Mersenne numbers and Fermat numbers, which have specialized factoring techniques that are inapplicable elsewhere.) The factorization was completed on February 2, 1999. Now, the previous record-holder was the RSA-130 number, and the process of factoring it was estimated as taking a total of 1000 MIPS-years of computer time. RSA-140, a number only 10 decimal digits longer, required twice that amount.

This, finally, is the heart of what makes RSA a trapdoor function: the gap between obtaining a number with two prime factors, and rediscovering the factors from the number itself. And the gap just keeps expanding as the numbers get larger.

The breakthrough that would completely destroy RSA's security would be an algorithm that actually produced a number's factors directly, instead of merely narrowing the search's scope. Such a thing has not been proven impossible, and it would probably be very hard to do so. But considering the renewed attention that has been focused on this problem in the last two decades, the likelihood of the existence of such an algorithm is once again starting to appear quite remote. Discovering one would change the face of number theory as much as RSA has changed the face of cryptography.

However -- if this were to happen, there are other trapdoor functions out there, waiting to be found. Whatever the future of RSA may be, the trapdoor cipher has certainly changed the face of cryptography forever.





