Modular arithmetic

With repeated addition or multiplication, numbers can become frustratingly large, which is why the hours of the day reset every 24 hours, and the months reset every year. Similarly, the built-in arithmetic operations of computer programs restrict numbers to a fixed size, frequently 32 bits, which is considered enough for most purposes.

Modular arithmetic is a system for dealing with restricted ranges of integers.

<aside> 📘 We define $x\bmod N$ to be the remainder when $x$ is divided by $N$; that is, if $x=qN+r$ with $0\leq r< N$, then $x\bmod N = r$.

$x$ and $y$ are congruent $\mod N$ if they differ by a multiple of $N$, or in symbols, $x\equiv y\pmod{N}\Leftrightarrow N\text{ divides }(x-y)$.

</aside>
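As a quick sanity check of this definition, here is a minimal sketch in Python (the helper name `mod` is my own, not from the text); note that Python's built-in `%` operator already returns a remainder in the range $0$ to $N-1$ for positive $N$, even when $x$ is negative.

```python
def mod(x, N):
    """Remainder of x on division by N, in the range 0..N-1 (assumes N > 0)."""
    q, r = divmod(x, N)      # x == q*N + r with 0 <= r < N when N > 0
    return r

print(mod(11, 3))      # 2, since 11 = 3*3 + 2
print(mod(-7, 3))      # 2, since -7 = (-3)*3 + 2
print(11 % 3, -7 % 3)  # Python's % agrees: 2 2
```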

Another interpretation is that modular arithmetic divides the integers into $N$ equivalence classes, each of the form $\{i+kN: k\in \mathbb Z\}$ for some $i$ between $0$ and $N-1$. For example, there are three equivalence classes modulo 3:

$$ \cdots,-9,-6,-3,0,3,6,9,\cdots\\\cdots,-8,-5,-2,1,4,7,10,\cdots\\\cdots,-7,-4,-1,2,5,8,11,\cdots $$

A member of an equivalence class is substitutable for any other — when viewed modulo 3, the numbers 5 and 11 are no different. Under such substitutions, addition and multiplication remain well-defined.

<aside> 💡 If $x\equiv x'\pmod N$ and $y\equiv y'\pmod N$, then $x+y\equiv x'+y'\pmod N$ and $xy\equiv x'y'\pmod N$.

</aside>
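A quick numerical check of the substitution rule (a sketch of my own, not from the text; the helper name `congruent` is assumed):

```python
def congruent(a, b, N):
    """True iff a and b differ by a multiple of N."""
    return (a - b) % N == 0

N = 7
x, xp = 10, 3     # 10 ≡ 3 (mod 7)
y, yp = 16, 2     # 16 ≡ 2 (mod 7)
assert congruent(x + y, xp + yp, N)   # 26 ≡ 5 (mod 7)
assert congruent(x * y, xp * yp, N)   # 160 ≡ 6 (mod 7)
```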

It is not hard to check that in modular arithmetic, the usual associative, commutative and distributive properties of addition and multiplication continue to apply.

Taken together with the substitution rule, this implies that when performing a sequence of arithmetic operations, it is legal to reduce intermediate results to their remainders modulo $N$ at any stage, which can dramatically simplify big calculations. For example:

$$ 2^{345} \equiv(2^5)^{69}\equiv32^{69}\equiv1^{69}\equiv 1\pmod{31}. $$
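This reduction is easy to confirm numerically; as a sketch, Python's three-argument `pow` computes a power modulo $N$ directly:

```python
# Reduce modulo 31 at an intermediate stage: 2**5 = 32 ≡ 1 (mod 31).
assert (2**5) % 31 == 1
# So 2**345 = (2**5)**69 ≡ 1**69 ≡ 1 (mod 31); pow(2, 345, 31) agrees.
assert pow(2, 345, 31) == 1
```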

Modular addition and multiplication

To add two numbers $x$ and $y$ modulo $N$, we start with regular addition. Since $x$ and $y$ are each in the range $0$ to $N-1$, their sum is between $0$ and $2(N-1)$. If the sum exceeds $N-1$, we merely subtract off $N$. The overall computation therefore consists of an addition, and possibly a subtraction, of numbers that never exceed $2N$. Its running time is linear in the sizes of these numbers, in other words $O(n)$, where $n=\lceil \log N\rceil$ is the size of $N$; as a reminder, our convention is to use the letter $n$ to denote input size.
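A minimal sketch of this procedure (assuming the inputs are already reduced to the range $0$ to $N-1$; the function name is my own):

```python
def mod_add(x, y, N):
    """Add x and y modulo N, assuming 0 <= x, y < N."""
    s = x + y          # at most 2(N-1)
    if s >= N:         # a single comparison and subtraction suffice
        s -= N
    return s

print(mod_add(5, 4, 7))   # 2, since 5 + 4 = 9 = 7 + 2
```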

To multiply two numbers $x$ and $y$ modulo $N$, we again start with regular multiplication and then reduce the answer modulo $N$. The product is in the range $0$ to $(N-1)^2$, but this is still at most $2n$ bits long, since $\log(N-1)^2=2\log(N-1)\leq 2n$. To reduce the answer modulo $N$, we compute the remainder upon dividing it by $N$, using our quadratic-time division algorithm. Multiplication thus remains a quadratic operation.
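The corresponding sketch (again, the function name is my own; Python's `%` stands in for the quadratic-time division algorithm mentioned above):

```python
def mod_mul(x, y, N):
    """Multiply x and y modulo N, assuming 0 <= x, y < N."""
    p = x * y      # at most (N-1)**2, i.e. about 2n bits
    return p % N   # remainder on division by N

print(mod_mul(5, 4, 7))   # 6, since 20 = 2*7 + 6
```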

Division is not quite so easy. In ordinary arithmetic, there is just one tricky case — division by zero. In modular arithmetic, there are potentially other such cases as well, which will be discussed later. Whenever division is legal, however, it can be managed in cubic time, $O(n^3)$.
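A small illustration of a case beyond division by zero (my own example, not from the text): modulo 6, multiplying by 2 sends two different residues to the same result, so there is no consistent way to "divide by 2".

```python
# Modulo 6, multiplying by 2 is not invertible: two different inputs
# give the same product, so "dividing by 2" has no unique answer.
N = 6
print((2 * 2) % N, (2 * 5) % N)   # both print 4
print(2 % N == 5 % N)             # False: 2 and 5 are not congruent mod 6
```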

Modular exponentiation

In cryptosystems, it is often necessary to quickly compute $x^y \pmod N$ for values of $x$, $y$, and $N$ several hundred bits long. Is there an efficient method to do so?

To make sure the numbers we are dealing with are never too large, we need to perform all intermediate computations modulo $N$. We could calculate $x^y \pmod N$ by repeatedly multiplying by $x$ modulo $N$. The resulting sequence of intermediate products,

$$ x \pmod N\rightarrow x^2\pmod N\rightarrow x^3\pmod N\cdots\rightarrow x^y\pmod N, $$