Abstract | This paper presents six algorithms: three primality tests and three factorization algorithms. Factorization algorithms are those that, for a given integer n, find a decomposition of n in the sense of the Fundamental Theorem of Arithmetic (Theorem 1). Primality tests are algorithms that, for a given integer n, answer the question of whether n is prime. Although factorization algorithms can also be used for this purpose, in practice this is uncommon: in terms of time complexity, it is much easier to decide that a number is composite than to produce a concrete factorization. While the fastest factorization methods have subexponential complexity, primality tests run in polynomial time, so factorization methods are used as primality tests only for small numbers. The primality tests covered are:
• the Fermat test, which underlies many primality tests in use today,
• the Miller–Rabin test, a representative of probabilistic primality tests,
• the AKS algorithm, the only known deterministic polynomial-time primality test.
In practice, algorithms such as the Miller–Rabin test are used most often; they can err, but the probability of error can be made arbitrarily small. The AKS algorithm is still mainly of theoretical importance. Among factorization algorithms, some of the most widely used methods today are presented, all of subexponential complexity:
• the quadratic sieve method (QS),
• the number field sieve (NFS),
• the elliptic curve method (ECM).
Algorithms that, like ECM, depend on the size of the smallest prime factor of the number to be factored are called first-category algorithms. QS and NFS depend solely on the size of the number to be factored and are called second-category algorithms. For each algorithm, the principle on which it is based is explained, together with the basic concepts needed to understand it.
Abstract (english) | In this paper six algorithms are presented: three primality tests and three factorization algorithms. Factorization algorithms (methods) are those algorithms that give some decomposition of a given integer n in the sense of the Fundamental Theorem of Arithmetic (Theorem 1). Primality tests are those algorithms that, for a given positive integer n, decide whether n is prime or composite. Although factorization algorithms can also be used for that purpose, it is not common to do so; because of their complexity, they are useful for primality testing only for small numbers. The fastest known factorization methods, as shown in this paper, have subexponential time complexity, while primality tests run in polynomial time. This is to be expected, since it is far easier to decide whether a number is prime than to produce a concrete factorization. The primality tests discussed here are:
• the Fermat test, which is a cornerstone of many primality tests,
• the Miller–Rabin test, which represents fast probabilistic primality tests,
• the AKS algorithm, which is so far the only known deterministic polynomial-time primality test.
In practice, AKS usually gives way to algorithms such as the Miller–Rabin test, because they are faster and their probability of error can be made arbitrarily small. Therefore, the importance of AKS is, for now, mostly theoretical. The factorization methods chosen to be presented are:
• the quadratic sieve method (QS),
• the number field sieve (NFS),
• the elliptic curve method (ECM).
The time complexity of ECM depends strongly on the size of the least prime factor of the number being factored; algorithms with this property are called first-category algorithms. The complexity of QS and NFS depends solely on the size of the number being factored; such algorithms are called second-category, or Kraitchik-family, algorithms. All three algorithms have subexponential time complexity, meaning that, asymptotically, they are faster than exponential algorithms but slower than any polynomial algorithm. For each algorithm discussed in this paper, the principle on which it is based is explained, as are the basic concepts necessary for understanding it.
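As a concrete illustration of the probabilistic primality tests summarized above, the following is a minimal Python sketch of the Miller–Rabin test. It is not taken from the paper; the function name miller_rabin and the rounds parameter are illustrative choices. For an odd composite n, each random base detects compositeness with probability at least 3/4, so after the given number of rounds the error probability is at most 4^(-rounds).

import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return True if n is probably prime, False if n is certainly composite."""
    if n < 2:
        return False
    # Handle small primes and obvious small factors by trial division.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness for the compositeness of n
    return True

For example, miller_rabin(561) returns False for the Carmichael number 561, which fools the plain Fermat test for every base coprime to it; this is why the Miller–Rabin test, rather than the bare Fermat test, is used in practice.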