Fast Linear Programming through Transprecision Computing on Small and Sparse Data
A plethora of program analysis and optimization techniques rely on linear programming at their heart. However, such techniques are often considered too slow for production use. While today’s best solvers are optimized for complex problems with thousands of dimensions, linear programming, as used in compilers, is typically applied to small and seemingly trivial problems, but to many instances in a single compilation run. As a result, compilers do not benefit from the decades of research on optimizing large-scale linear programming. We design a simplex solver targeted at compilers. A novel theory of transprecision computation, applied from individual elements to full data structures, provides the computational foundation. By carefully combining it with optimized representations for small and sparse matrices and specialized small-coefficient algorithms, we (1) reduce memory traffic, (2) exploit wide vectors, and (3) use low-precision arithmetic units effectively. We evaluate our work by embedding our solver into a state-of-the-art integer set library and implementing one essential operation, coalescing, on top of our transprecision solver. Our evaluation shows a multiple orders-of-magnitude speedup on the core simplex pivot operation and mean speedups of 3.2x (vs. GMP) and 4.6x (vs. IMath) for the optimized coalescing operation. Our results demonstrate that optimizations for low dimensionality and small coefficients exploit the wide SIMD instructions of modern microarchitectures effectively. Our work on optimizing a rational simplex solver, as well as a single key operation on integer sets, coalescing, is the first of several steps towards a future where integer set libraries based on transprecision arithmetic power compiler analysis and optimization frameworks.
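To make the transprecision idea concrete, the following sketch (our own illustration in C; mul_add_i32 is a hypothetical helper, not code from the paper or from isl) shows how a single multiply-add, the workhorse of a simplex pivot, can run in narrow 32-bit arithmetic with an explicit overflow signal that lets the caller escalate to a wider or arbitrary-precision representation only when needed.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of a transprecision multiply-add: compute
 * a*b + c in 32-bit arithmetic and report overflow so the caller
 * can redo the computation in a wider type (e.g., 64-bit, then an
 * arbitrary-precision library such as GMP or IMath as a last
 * resort). The name and structure are illustrative, not isl's. */
static bool mul_add_i32(int32_t a, int32_t b, int32_t c, int32_t *out)
{
    int32_t prod;
    if (__builtin_mul_overflow(a, b, &prod))
        return false;   /* overflow: escalate to wider precision */
    if (__builtin_add_overflow(prod, c, out))
        return false;
    return true;        /* fast path: result fits in 32 bits */
}

int main(void)
{
    int32_t r;
    /* Coefficients in compiler-generated problems are typically
     * tiny, so the narrow fast path dominates in practice. */
    if (mul_add_i32(3, 7, -2, &r))
        printf("fast path: %d\n", r);
    /* Large inputs trip the overflow check; a real solver would
     * then retry the pivot at the next precision level. */
    if (!mul_add_i32(INT32_MAX, 2, 0, &r))
        printf("overflow: escalate to wider precision\n");
    return 0;
}
```

Under this assumed scheme, the narrow representation is also what makes the abstract's SIMD claim plausible: 32-bit (or smaller) elements pack many lanes per vector register, so low dimensionality and small coefficients translate directly into wide, cheap vector operations.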