A lattice is a discrete group of points in space, which can be defined as the set of all integer linear combinations of a set of linearly independent vectors b1, . . . , bd known as a basis. A lattice has infinitely many bases, but so-called “reduced” bases, which consist of short and nearly orthogonal vectors, are much more interesting. Lattice reduction, the mathematical problem of finding such bases, has a long history that can be traced back to the 18th century, but it gained particular prominence after Lenstra, Lenstra and Lovász [LLL82] introduced a polynomial-time approximation algorithm for it in 1982 that became known as LLL. Since the advent of LLL, lattice reduction has proved to be a powerful tool for cryptanalysis: early examples include attacks on knapsack-based cryptosystems [Sha82] and Coppersmith’s small root finding algorithm [Cop97], which in particular broke many variants of RSA. This paper focuses on another major cryptanalytic application of lattice reduction: lattice attacks against (EC)DSA (and related signature schemes like Schnorr’s) when bits of the nonce are known. DSA and ECDSA are well-established standards for digital signatures based on the discrete logarithm problem, both of which require, for each generated signature, a fresh random value called the nonce. It is well known that if the same nonce is used twice, the adversary can directly compute the private key, due to a linear relation between the nonce and the private signing key. Even worse, partial information about the nonces of multiple signatures can lead to recovery of the full private key. The original approach to do so, due to Bleichenbacher, relied on discrete Fourier analysis techniques [Ble00, DHMP13, AFG+14b, TTA18, ANT+20], but lattice reduction was also discovered to provide an attack technique, in connection with Boneh and Venkatesan’s hidden number problem (HNP) [BV96].
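The nonce-reuse key recovery mentioned above can be illustrated with a short sketch. All values below are toy illustrations (a small Mersenne prime in place of a real group order, and an arbitrary r in place of the x-coordinate of k·G), not parameters from this paper:

```python
# Toy illustration of ECDSA-style key recovery from a reused nonce.
# Two signatures with the same nonce k (hence the same r) give two
# linear equations in the two unknowns (k, x), which we solve mod q.

q = 2**31 - 1          # toy prime group order (real curves use ~256-bit q)
x = 123456789          # private signing key (to be recovered)
k = 987654321          # the nonce, reused across both signatures
r = 555555555          # in real ECDSA, r is derived from k*G; any nonzero value works here

def sign(h):
    # s = k^{-1} * (h + r*x) mod q
    return pow(k, -1, q) * (h + r * x) % q

h1, h2 = 111, 222      # two (hashed) messages
s1, s2 = sign(h1), sign(h2)

# From s_i*k - r*x = h_i (mod q) for i = 1, 2:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
print(x_rec == x and k_rec == k)   # True
```

Subtracting the two signature equations eliminates x, yielding k directly; substituting k back into either equation then yields x.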
The HNP is a number-theoretic problem that was originally introduced to establish bit security results for the Diffie–Hellman key exchange. Boneh and Venkatesan showed that it can be seen as a bounded distance decoding (BDD) instance in a lattice, which can be solved with Babai’s nearest plane algorithm [Bab86] for suitable parameters. Subsequently, Howgrave-Graham and Smart [HGS01], and later Nguyen and Shparlinski [NS02], observed that the problem of attacking (EC)DSA when some top or bottom bits of the nonces are known is an instance of the HNP, and could be attacked using the same lattice techniques. However, when the nonce leakage is very small, the attack becomes much more difficult, mainly because the hidden lattice vector in BDD is not very close to the target vector. It took significant developments in lattice reduction algorithms to advance the state of the art. In 2013, Liu and Nguyen [LN13] were able to attack 160-bit DSA with 2-bit nonce leakage using the BKZ 2.0 algorithm introduced just a few years earlier [CN11], relying on a very large block size of 90, with pruned enumeration as the SVP oracle. In a very recent work [AH21], Albrecht and Heninger utilize the state-of-the-art lattice reduction framework G6K [ADH+19] together with the novel idea of predicate sieving to break new records.
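The lattice view of the HNP can be sketched end to end in a toy, self-contained example. The sketch below uses a 31-bit modulus and a very generous 20 bits of leakage per sample (far easier than the 2-bit settings discussed above), a textbook LLL over exact rationals rather than a production library like FPLLL or G6K, and an embedding of the known bits as an extra basis row; all parameters and names are illustrative:

```python
# Toy HNP lattice attack: given t_i and the top bits a_i of x*t_i mod q,
# recover x by finding a short vector in the embedded HNP lattice.
from fractions import Fraction
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL; recomputes Gram-Schmidt from scratch (fine in dim ~8)."""
    B = [[Fraction(v) for v in row] for row in basis]
    n = len(B)
    def gram_schmidt():
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(B[i])
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu
    Bs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce row k
            r = round(mu[k][j])
            if r:
                B[k] = [a - r * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]     # swap and step back
            Bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return B

random.seed(1)
q = 2**31 - 1               # toy prime modulus; real (EC)DSA uses 160-521 bits
l = 20                      # bits of x*t_i mod q leaked per sample (huge on purpose)
n = 6                       # number of HNP samples
E = 2**(31 - l)             # bound on the unknown low part e_i
x = random.randrange(1, q)  # the hidden number

samples = []
for _ in range(n):
    t = random.randrange(1, q)
    u = x * t % q
    samples.append((t, u - u % E))   # a_i = top bits of x*t_i mod q

# Since x*t_i - a_i = e_i (mod q) with 0 <= e_i < E, the lattice below
# contains the unusually short vector (e_1, ..., e_n, x*E/q, -E).
basis = [[q if j == i else 0 for j in range(n + 2)] for i in range(n)]
basis.append([t for t, _ in samples] + [Fraction(E, q), 0])
basis.append([a for _, a in samples] + [0, E])

reduced = lll(basis)
recovered = None
for row in reduced:                  # look for the embedded short vector
    if abs(row[-1]) == E:
        cand = int(abs(row[-2]) * q / E) % q
        if all(0 <= cand * t % q - a < E for t, a in samples):
            recovered = cand
print(recovered == x)                # True
```

With these easy parameters the hidden vector is far shorter than the Gaussian heuristic predicts for the rest of the lattice, so even plain LLL recovers it; the hard regimes described below need BKZ with large block sizes or sieving.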

## Solving the Hidden Number Problem

In addition to demonstrating the practicality of the side channel, it is important to give parameters for which the attack performs well. While it is easy to determine the expected number of signatures one needs to gather before having enough HNP inequalities, it is less easy to determine how many inequalities are needed. Estimates for this number are derived in Section 4, but this is done after making several simplifying assumptions and does not capture the actual process of solving the HNP.

This data is collected by simulating the side-channel attack and running the lattice-based HNP solver several times. Both the chosen-plaintext and known-plaintext attacks were tested for orders of size 256 and 384, the sizes recommended by NSA Suite B Cryptography. 160-bit and 521-bit orders were also considered in order to capture the behavior at both allowed extremes. For each test case, a random prime order of the selected size was generated, and a parameter l was chosen, roughly corresponding to the number of bits leaked per signature. In the chosen-plaintext attack, l was used to set m = q/2^l, and in the known-plaintext attack it set mt = q/2^l. To determine how many HNP inequalities to include, which sets the dimension of the lattice, an overhead factor f was used. Thus for the chosen-plaintext attack, ⌈f·log q/l⌉ HNP inequalities are generated, and for the known-plaintext attack, this value is ⌈f·log q/(l + log e)⌉. Based on the estimate, the attack is expected to fail for f < 1 and expected to succeed for f > 1. This formed the instance of the HNP.

A reduction method was chosen at random to attempt to solve the instance of the HNP. Specifically, after using the embedding strategy to encode the HNP as a basis for a lattice, the FPLLL library's implementation of either LLL or BKZ was used to reduce the lattice. Block sizes of 10, 15, 20, and 25 were used for BKZ. After reduction completed or a timeout of five minutes was reached, the best guess was retrieved and compared to the actual x used to generate the data. The time taken and outcome were both recorded. Experiments were performed on an Amazon EC2 C5.18xlarge instance.

In [BvdPSY14], analysis time is used when considering the performance of a set of parameters, but in this work, minimizing the number of signatures is prioritized.
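The parameter selection described above can be sketched as follows. This is our reading of the formulas, under the assumptions that logs are base 2, that log q is approximated by the bit length of the order, and that known-plaintext samples each contribute l + log2(e) bits; the function name is ours:

```python
# Sketch of the experiment's lattice-dimension choice: number of HNP
# inequalities for a q of `qbits` bits, leak parameter l, and overhead f.
import math

def num_inequalities(qbits, l, f, known_plaintext=False):
    # chosen-plaintext: ceil(f * log q / l)
    # known-plaintext:  ceil(f * log q / (l + log2 e))
    bits_per_sample = l + (math.log2(math.e) if known_plaintext else 0)
    return math.ceil(f * qbits / bits_per_sample)

print(num_inequalities(256, 4, 1.2))                        # 77
print(num_inequalities(256, 4, 1.2, known_plaintext=True))  # 57
```

With f just above 1 this gives the smallest lattice that is expected to contain enough information, matching the observation below that reduction fails for f < 1.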
This objective is inspired by the kinds of systems an attacker may wish to target with this attack. If the goal is to recover some long-lived server key or smart card secret, the attacker can probably tolerate a couple of minutes of offline computation. However, each additional signature extends the time of the collection phase and increases the odds that the attack will be detected. Since collection time scales linearly with the number of signatures, this approach of minimizing signatures is likely to optimize attacks in all but the most time-sensitive situations.

The experiments reveal that the estimates derived in Section 4 are accurate in most cases. Figure 1 demonstrates this, as it is clear that reduction fails for f < 1 and succeeds for f > 1.3. There is a transition region between 1 and 1.3 where the probability of success is between 0 and 1. Assuming the understanding of the information leaked per sample is correct, an ideal reduction algorithm would transition quickly between low and high probability of success, since as soon as there is enough information, the ideal algorithm would be able to solve the problem. However, LLL and BKZ are not ideal algorithms, and the observed transition is more gradual.

This slope between success and failure becomes flatter as l decreases, as can be seen in Figure 2. For l = 3, the curve appears completely flat, indicating that there is a minimum number of bits that must leak for reduction to be successful. Since the expected number of signatures is exponential in l, the optimal value will be when l is as low as possible, but not so low that reduction is unlikely for reasonable values of f. It can also be seen that as the BKZ block size increases, so does the success ratio. These results indicate that increasing block size improves the reduction algorithm's ability to recover the hidden number from a limited number of samples; however, increasing block size comes at the cost of time.
This was especially the case for tests of the 384-bit modulus, since the larger matrices timed out past the five-minute cutoff when using a block size of 25.

Putting all this together provides the approximate minimum bounds for a 90% success rate given in Table 2. In general, the chosen-plaintext attack requires fewer signatures in total, but the known-plaintext attack can use a smaller value of l. These signature counts are comparable to previous work on practical attacks against TLS servers [BT11, İGI+16, Won15] and smart cards [DMHMP13], and so this work demonstrates an attack that is both practical and powerful for a wide range of key sizes.