Extending the limits for big data RSA cracking: Towards cache-oblivious TU decomposition

dc.contributor.author: Abu Salem, Fatima K.
dc.contributor.author: Al Arab, Mira
dc.contributor.author: Yang, Laurence Tianruo
dc.contributor.department: Department of Computer Science
dc.contributor.faculty: Faculty of Arts and Sciences (FAS)
dc.contributor.institution: American University of Beirut
dc.date.accessioned: 2025-01-24T11:22:58Z
dc.date.available: 2025-01-24T11:22:58Z
dc.date.issued: 2020
dc.description.abstract: Big Data security processes require mining large volumes of content that was traditionally not used for security analysis. The RSA algorithm has become the de facto standard for encryption, especially for data sent over the Internet. RSA derives its security from the hardness of the Integer Factorisation Problem. As the size of the modulus of an RSA key grows with the number of bytes to be encrypted, the corresponding linear system to be solved in the adversary's integer factorisation algorithm also grows. In the age of big data, this makes it compelling to redesign linear solvers over finite fields so that they exploit the memory hierarchy. To this end, we examine several matrix layouts based on space-filling curves that allow for a cache-oblivious adaptation of parallel TU decomposition for rectangular matrices over finite fields. The TU algorithm of Dumas and Roche (2002) requires index conversion routines for which the cost of encoding and decoding the chosen curve is significant. Using a detailed analysis of the number of bit operations required for the encoding and decoding procedures, and factoring in the cost of the lookup tables that represent the recursive decomposition of the Hilbert curve, we show that the Morton-hybrid order incurs the least cost for the index conversion routines required throughout the matrix decomposition, as compared to the Hilbert, Peano, or Morton orders. The motivation is that cache-efficient parallel adaptations whose natural sequential evaluation order exhibits a lower cache miss rate deliver faster overall performance on parallel machines with private or shared caches and on GPUs. © 2019 Elsevier Inc.
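The abstract centers on the cost of index conversion for space-filling-curve layouts. As an illustrative sketch only (not the paper's implementation, and with the block size chosen arbitrarily), the following Python shows Morton (Z-order) encoding/decoding by bit interleaving, plus a simple Morton-hybrid variant that applies Z-order across tiles and row-major order within each tile:

```python
def morton_encode(row, col, bits=16):
    # Interleave the bits of row and col into one Z-order index:
    # row bits land in odd positions, col bits in even positions.
    z = 0
    for i in range(bits):
        z |= ((row >> i) & 1) << (2 * i + 1)
        z |= ((col >> i) & 1) << (2 * i)
    return z

def morton_decode(z, bits=16):
    # Inverse of morton_encode: de-interleave odd/even bit positions.
    row = col = 0
    for i in range(bits):
        row |= ((z >> (2 * i + 1)) & 1) << i
        col |= ((z >> (2 * i)) & 1) << i
    return row, col

def morton_hybrid_encode(row, col, block=4, bits=16):
    # Morton-hybrid layout (illustrative): Z-order across block
    # coordinates, row-major order inside each block x block tile.
    outer = morton_encode(row // block, col // block, bits)
    inner = (row % block) * block + (col % block)
    return outer * block * block + inner
```

The hybrid scheme is what makes the conversion cheap: only the high-order bits are interleaved, while the intra-tile offset is ordinary row-major arithmetic.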
dc.identifier.doi: https://doi.org/10.1016/j.jpdc.2019.12.016
dc.identifier.eid: 2-s2.0-85077514130
dc.identifier.uri: http://hdl.handle.net/10938/25586
dc.language.iso: en
dc.publisher: Academic Press Inc.
dc.relation.ispartof: Journal of Parallel and Distributed Computing
dc.source: Scopus
dc.subject: Cache-oblivious algorithms
dc.subject: Exact linear algebra
dc.subject: Morton-hybrid order
dc.subject: Space-filling curves
dc.subject: Big data
dc.subject: Cache memory
dc.subject: Cost benefit analysis
dc.subject: Cryptography
dc.subject: Decoding
dc.subject: Encoding (symbols)
dc.subject: Factorization
dc.subject: Linear systems
dc.subject: Table lookup
dc.subject: De facto standard
dc.subject: Encoding and decoding
dc.subject: Hybrid orderings
dc.subject: Matrix decomposition
dc.subject: Rectangular matrix
dc.subject: Recursive decomposition
dc.subject: Space-filling curve
dc.subject: Matrix algebra
dc.title: Extending the limits for big data RSA cracking: Towards cache-oblivious TU decomposition
dc.type: Article

Files

Original bundle

Name: 2020-3397.pdf
Size: 2.67 MB
Format: Adobe Portable Document Format