News

Are We Secure Against Post-Quantum Cyber-Attacks?

  • Interdisciplinary Centre for Security, Reliability and Trust (SnT)
    29 October 2021
  • Category
    Research

How safe is your data? Currently, most computer and mobile device users benefit from a generally high level of security, mainly because our data and communications – such as those processed in databases, payments, cloud storage, email and many other applications – are, for the most part, encrypted. But current encryption techniques might soon become obsolete. So soon, in fact, that the New Scientist recently reported* that intelligence agencies might already be intercepting and storing encrypted data to decrypt in the near future. The era of post-quantum cryptography is upon us, and its possibilities are unsettling.

The risk derives from the rapid development of quantum computing, a technology that has existed since the 1990s, at least in experimental form, and that is quickly gaining commercial momentum. Instead of computing with bits, quantum computers use qubits – tiny units of information that can represent not only the discrete states 1 and 0 (as bits do), but also a superposition of both states at the same time. This means that quantum computers, once they become viable, will offer computational capabilities that are, for certain problems, orders of magnitude more powerful than those of classical computers.

For close to two decades, researchers and institutions alike have been working on the security challenges that this technology is about to introduce, particularly in cryptography. To give an example, most of our emails and messages are currently encoded into unintelligible strings by cryptographic algorithms – mathematical functions that transform the information to conceal it from anybody but the recipient. Just as these algorithms were designed, however, they can also be broken; what makes this unfeasible today is the vast amount of time it would require with current technology. And that is the crux of the matter: with their greatly increased computational power, quantum computers will be able to easily break some of today’s encryption ciphers, especially those based on public-key protocols.

As we draw closer to the commercial deployment of quantum computers, it becomes ever more essential to establish new, stronger algorithms that will continue to ensure the privacy and safety we currently enjoy. It is therefore only natural that, in addition to researchers, institutions have been working for many years to identify robust quantum-resistant algorithms. Among them is the U.S. National Institute of Standards and Technology (NIST), which in 2016 initiated a process to solicit, evaluate, and standardise one or more quantum-resistant public-key cryptographic algorithms. By the time the deadline elapsed, in 2017, several candidates had been submitted. Since then, they have been tested and evaluated by NIST as well as by researchers worldwide.

One of these candidates is Supersingular Isogeny Key Encapsulation (SIKE), a suite for exchanging the keys needed to encrypt and decrypt information. SIKE was submitted to NIST in 2017, after years of prior development by an international group of academic and industry researchers in cryptography.

To foster cryptanalysis of SIKE, Microsoft launched a challenge in June 2021 in which they asked the cryptography community to try to break two instances of SIKE with “toy” security parameters. In particular, Microsoft proposed parameter values for these instances that are lower than those proposed for standardisation at NIST. Gauging exactly which level of security is necessary is very important, because encrypting information comes at a high cost – in both power and time. The company offered an award of $5,000 for breaking the smaller instance, and $50,000 for the larger one.

In August 2021, Giuseppe Vitto, a doctoral candidate at the University of Luxembourg, and alumnus Aleksei Udovenko, Cryptography Expert at CryptoExperts, broke the smaller instance and were awarded the $5,000 prize. Crucially, though research on these algorithms is priceless, the prizes are purposely kept low to attract new and innovative methods in cryptanalysis. “The challenge was to break the algorithm in this configuration not with sheer computational power – which was possible, but very expensive – but by developing new mathematical ideas. If anybody managed, of course, it meant the parameters needed to be revised,” says Vitto, a member of the CryptoLUX research group at SnT.

“Our solution is based on an approach similar to the six degrees of separation – the idea that any two people on Earth are six connections away from each other. To make an easy comparison, we could say that the problem we were trying to solve was: find the chain of connections that ties together any two strangers in the world. The computation is exponential: we start looking at each person’s friends to find that central connection between the two initial subjects. Except that instead of six degrees, we worked on a chain of 91 degrees in a world with a population of trillions of quadrillions,” says Vitto.

Their solution follows one of the two classical methods for breaking suites like SIKE – an approach called meet-in-the-middle, which presents a big problem right from the start: due to its memory requirements, it is simply not feasible for real instances of SIKE. “Researchers realised that storing and managing the data required to attack SIKE with the meet-in-the-middle approach – as we go back and forth, so to speak, between each person’s contacts – quickly becomes unfeasible due to the huge amount of memory needed to complete it,” says Udovenko. So most researchers tried to break the algorithm using the other classical method, the van Oorschot-Wiener approach.
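In miniature, the meet-in-the-middle idea looks like the sketch below: compute a table of “walks” from one endpoint, then walk from the other endpoint until the two meet. The functions `f` and `g` here are arbitrary toy stand-ins for illustration only – they have nothing to do with SIKE’s actual isogeny computations.

```python
# Toy illustration of a meet-in-the-middle collision search.
# f and g stand in for walks from the two endpoints; they are
# arbitrary examples, NOT the real SIKE maps.

def f(x):          # walk from one endpoint
    return (7 * x + 3) % 1009

def g(y):          # walk from the other endpoint
    return (5 * y + 11) % 1009

def meet_in_the_middle(n):
    # Store every f(x) in a hash table keyed by its value...
    table = {f(x): x for x in range(n)}
    # ...then look each g(y) up, hoping the two walks "meet".
    for y in range(n):
        if g(y) in table:
            return table[g(y)], y   # a matching pair (x, y)
    return None

x, y = meet_in_the_middle(1009)
assert f(x) == g(y)
```

The hash table is exactly where the memory problem lives: at real SIKE sizes it would have to hold astronomically many entries, which is why this naive form is considered infeasible.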

In fact, also in June, one of SIKE’s authors, Luca De Feo – presenting at NIST’s Third PQC Standardization Conference about SIKE in general (not about the toy parameter instance) – stated that “[The] Best attack is the generic van Oorschot-Wiener (vOW) parallel collision finding algorithm”.

“Or so they thought,” says Vitto, “because the vOW approach requires less memory, and memory access – and the latency it causes – is always a bottleneck.” The reason is simple: ideally, cryptanalysts would use RAM, which is much faster than even the quickest SSDs. To make an easy comparison with a web browser, using RAM is like keeping 20 tabs open at the same time, while using an SSD is like manually going through your list of bookmarks to open each website one by one. Solving this challenge with meet-in-the-middle required many terabytes of memory.

But Udovenko and Vitto found a simple and elegant way around the memory-latency problem caused by random access patterns: they sorted the data and compared them sequentially, rather than randomly (as meet-in-the-middle usually prescribes). That allowed them to drastically cut down storage requirements, which in turn made it feasible to use SSDs instead of RAM and, together with other optimisations, to break the algorithm and win the challenge. The attack was carried out using the HPC facilities of the University of Luxembourg, and required less than 10 core-years and 256 terabytes of high-performance network storage.
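The sort-and-scan idea can be sketched as follows: sort both sides once, then sweep through them in lockstep, so every memory access is a predictable linear read instead of a random lookup. The values below are arbitrary toy data, assumed purely for illustration – not SIKE states or the team’s actual implementation.

```python
# Sketch of finding a collision by sorting both lists and scanning
# them sequentially (a merge-style sweep), instead of doing random
# hash-table lookups. Sequential reads are far friendlier to disks
# and caches than random access.

def find_collision_sorted(left, right):
    # Sort both value lists, remembering the original indices.
    a = sorted((v, i) for i, v in enumerate(left))
    b = sorted((v, j) for j, v in enumerate(right))
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i][0] == b[j][0]:        # the two sides meet
            return a[i][1], b[j][1]
        elif a[i][0] < b[j][0]:
            i += 1                    # advance the smaller side
        else:
            j += 1
    return None

# Toy data in which a matching value is known to exist.
left  = [(7 * x + 3) % 101 for x in range(40)]
right = [(5 * y + 11) % 101 for y in range(40)]
i, j = find_collision_sorted(left, right)
assert left[i] == right[j]
```

At scale, the same principle lets the sorted chunks live on slow sequential storage (like the SSDs and network storage mentioned above) rather than in RAM, since nothing ever needs to be fetched out of order.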

The team has just published a paper on the attack explaining their method in detail, and they were awarded at the Eurocrypt 2021 conference, which took place on 17-21 October. One question remains open: will they participate in the bigger challenge? “We can’t use the method we’ve used in this instance for the bigger challenge, because our computational resources do not really scale,” they say, “but that doesn’t mean that we won’t try harder.”


*New Scientist, 12 October 2021.