#symboliccomputation

waynerad@diasp.org

Symbolic computation for robotics. "Symbolic computation centers around manipulation of mathematical expressions as algebraic instead of numerical quantities. Functions, symbols, and literals are stored as classes in code that can be traversed and transformed with symbolic substitution, simplification, differentiation, and solving."
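To make that concrete, here is a minimal sketch using plain SymPy (one of the two symbolic backends SymForce supports) showing an expression treated as an algebraic object that can be simplified, differentiated, and substituted into:

```python
import sympy as sp

# Symbols are Python objects, not numbers; expressions built from them
# can be traversed and transformed algebraically.
x, y = sp.symbols("x y")
expr = sp.sin(x) ** 2 + sp.cos(x) ** 2 + x * y

simplified = sp.simplify(expr)       # trig identity collapses to x*y + 1
derivative = sp.diff(simplified, x)  # symbolic differentiation -> y
substituted = simplified.subs(x, 2)  # symbolic substitution -> 2*y + 1

print(simplified)
print(derivative)
print(substituted)
```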

"SymForce supports two symbolic backends -- SymPy and SymEngine. SymEngine is a C++ implementation compatible with the SymPy API, but is dramatically faster for manipulating large expressions. It is the default choice."

"SymForce implements several core types used in robotics, such as matrices, rotations, poses, camera models, noise models, and barrier functions. These types have symbolic implementations defined as Python classes. Fast runtime classes with identical interfaces are autogenerated for target languages."

"The generated classes do not depend on the rest of SymForce and can be deployed as standalone tools with minimal dependencies. For example, the generated C++ classes depend only on Eigen and the Python types only on NumPy. In C++ they are templated on the scalar type, and require zero dynamic memory allocation, making them suitable for embedded environments. The autogenerated runtime classes are functional, with readable interfaces and excellent test coverage."

A lot of the system concerns something in mathematics called Lie groups. "Rotations, poses, and camera models are central to robotics code, and getting efficient and correct tangent-space Jacobians is a common task for many optimization use cases. GTSAM defines geometry types and their Lie calculus in C++ and wraps them in Python." GTSAM is a non-linear optimization library; the name stands for Georgia Tech Smoothing and Mapping. You might be able to guess where it was created.


"To allow generic code to operate on all geometry types we use a concepts (or traits) mechanism inspired by GTSAM. Using concepts instead of inheritance allows generic programming with external types such as language-defined scalar and sequence types, or NumPy and Eigen matrices. It also makes it easy to extend or add new types, both within SymForce and in user code." "In Python, this mechanism is implemented with dynamic dispatch. In C++, this is done via template specialization."

"A critical step in code generation is common subexpression elimination, which is the process of traversing a symbolic expression to find duplicate intermediate terms and pulling them out into temporary variables that are computed only once. Common subexpression elimination results in enormous efficiency gains."
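SymPy exposes this step directly as `sympy.cse`. A small sketch of what the flattening pass does, hoisting the repeated terms `x + y` and `sin(x + y)` into temporaries:

```python
import sympy as sp

x, y = sp.symbols("x y")
# (x + y) and sin(x + y) each appear more than once; CSE hoists them out.
expr = sp.sin(x + y) * (x + y) + sp.cos(sp.sin(x + y))

replacements, reduced = sp.cse(expr)
for sym, sub in replacements:
    print(sym, "=", sub)   # x0 = x + y; x1 = sin(x0)
print("result =", reduced[0])
```

Generated code then computes `x0` and `x1` once each instead of re-evaluating `sin(x + y)` at every occurrence.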

"SymForce gains enormous performance advantages by generating runtime functions that consist of a single branchless chain of instructions that share all common subexpressions." "In realistic scenarios, most larger functions are too costly for the compiler to inline." "Symbolic code addresses this problem with explicit separation between the symbolic and the runtime contexts. The symbolic code is written with small, composable functions, but any evaluated quantities are generated as flat expressions amenable to optimization."

"This process of flattening also helps with cache performance."

"SymForce can yield order of magnitude speedups in the multiplication of matrices that include zero entries. Any amount of sparsity will lead to a large number of terms that do not need to be computed, as they would otherwise be multiplied by zero at runtime."
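A quick illustration in plain SymPy (not SymForce's generated code): when the matrices are multiplied symbolically, every term touching a structural zero vanishes at expression-build time, so the generated runtime code never performs those multiplications at all.

```python
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
# An upper-triangular matrix: one entry is a structural zero.
A = sp.Matrix([[a, b],
               [0, c]])
B = sp.Matrix([[d, 0],
               [0, d]])

# Products involving the zero entries disappear symbolically.
C = A * B
print(C)  # Matrix([[a*d, b*d], [0, c*d]])
```

A dense numeric matrix multiply would compute eight products here; the symbolic result needs only three.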

"By multiplying matrices symbolically and generating code for the result, we both exploit the structure of the matrices and share expressions between them."

"In robotics and computer vision, matrix multiplication is prevalent in transformations, projections, uncertainty propagation, and especially for Jacobians during automatic differentiation. Performance gains compound from longer chains of matrix multiplications and more complex shared terms between them. SymForce can flatten code across thousands of function calls and matrix multiplications into a single branchless function that shares all common subexpressions."

"Symbolic expressions can be algebraically simplified into forms that are faster to compute. Categories of simplifications include expansion, factorization, term collection, cancellation, fraction decomposition, trigonometric and logarithmic identities, series expansions, and limits."
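A few of those categories, sketched with the corresponding SymPy helpers:

```python
import sympy as sp

x = sp.symbols("x")

# Cancellation: remove a common factor from a rational expression.
print(sp.cancel((x**2 - 1) / (x - 1)))         # x + 1

# Trigonometric identity: double-angle formula.
print(sp.trigsimp(2 * sp.sin(x) * sp.cos(x)))  # sin(2*x)

# Factorization: rewrite a polynomial in product form.
print(sp.factor(x**2 + 2*x + 1))               # (x + 1)**2
```

Each rewritten form costs fewer runtime operations than the original (e.g. one `sin` call instead of a `sin`, a `cos`, and two multiplies).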

"Symbolic differentiation has compelling advantages over automatic differentiation, both by requiring less handwritten code, and by sharing more subexpressions and eliminating the need for matrix multiplication at runtime." Automatic differentiation is what tools like PyTorch excel at.

"Automatic differentiation is the prevalent approach for computing derivatives of large computation graphs. Given a computation graph, automatic differentiation produces another computation graph to compute a particular derivative, with the size of the resulting graph no bigger than a constant multiple of the original. It is often claimed that symbolic differentiation is intractable or produces exponentially large computation graphs, and is therefore unusable for nontrivial computations."

"Representing the derivative as a flattened symbolic expression allows for powerful simplifications across function and matrix multiplication boundaries, for instance in the common case where the Jacobians used by automatic differentiation contain shared terms or zeros. As a result, our symbolic differentiation and code generation approach outperforms runtime automatic differentiation for many robotics problems."
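A sketch of the idea in plain SymPy: compute a Jacobian symbolically, then let common subexpression elimination flatten the shared terms across all of its entries at once.

```python
import sympy as sp

x, y = sp.symbols("x y")
# A small vector-valued function; its Jacobian entries share subexpressions.
f = sp.Matrix([sp.sin(x * y), sp.cos(x * y)])

J = f.jacobian(sp.Matrix([x, y]))
print(J)  # entries like y*cos(x*y), -x*sin(x*y), ...

# Flatten across the whole matrix: CSE finds x*y, sin(x*y), and cos(x*y)
# once, shared by all four entries.
replacements, reduced = sp.cse(J)
print(replacements)
print(reduced[0])
```

Runtime autodiff would typically evaluate these entries through separate chain-rule products; the flattened symbolic form computes each shared term exactly once.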

"Lie groups are common parameterizations in robotics and computer vision. When computing a 'derivative' of a function whose input is a member of a Lie group, typically the desired quantity is the derivative with respect to a perturbation in the tangent space around the input."

"In most packages, painstaking care is taken to hand-write these tangent-space derivatives. SymForce computes them all automatically. We provide two approaches for this -- symbolic application of the chain rule and first-order retraction."
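Here is a toy version of the retraction idea for SO(2), the group of 2D rotations, written in plain SymPy (SymForce's own geometry types and code generation are not shown). Perturb the rotation by a tangent-space variable, differentiate, and evaluate at zero perturbation:

```python
import sympy as sp

theta, eps = sp.symbols("theta epsilon")

def rot2(a):
    # 2D rotation matrix: the simplest Lie group example (SO(2)).
    return sp.Matrix([[sp.cos(a), -sp.sin(a)],
                      [sp.sin(a),  sp.cos(a)]])

# First-order retraction: compose R(theta) with a perturbation R(eps),
# differentiate with respect to eps, then evaluate at eps = 0.
perturbed = rot2(theta) * rot2(eps)
tangent_jacobian = sp.simplify(sp.diff(perturbed, eps).subs(eps, 0))
print(tangent_jacobian)
# Matrix([[-sin(theta), -cos(theta)], [cos(theta), -sin(theta)]])
```

This is the derivative "with respect to a perturbation in the tangent space around the input" from the quote above, obtained mechanically rather than hand-derived.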

"Many branches can be formulated with primitives like the sign function, absolute value, floor, min, and max." "These operations are performed with bit operations at the assembly level, and do not introduce true branches."
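For example, a clamp that would normally be two `if` statements can be expressed with min/max primitives, which stay inside the symbolic expression and compile to branchless instructions in the generated code. A SymPy sketch:

```python
import sympy as sp

x, lo, hi = sp.symbols("x lo hi")

# clamp(x, lo, hi) written with Min/Max instead of if/else branches.
clamp = sp.Max(lo, sp.Min(x, hi))

print(clamp.subs({x: 5, lo: 0, hi: 3}))   # 3  (clamped to upper bound)
print(clamp.subs({x: -2, lo: 0, hi: 3}))  # 0  (clamped to lower bound)
print(clamp.subs({x: 2, lo: 0, hi: 3}))   # 2  (inside the range)
```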

"Functions encountered in robotics are often smooth, but properly addressing singularity points is critical to avoid NaN values."

"Our method is to shift the input to the function away from the singular point with an infinitesimal variable."
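The classic example is sin(x)/x, which is smooth everywhere but evaluates to 0/0 = NaN at exactly x = 0. A simplified SymPy sketch of the epsilon-shift idea (SymForce's actual handling is more careful about the sign and magnitude of epsilon):

```python
import sympy as sp

x, eps = sp.symbols("x epsilon")

# sin(x)/x is smooth, but naive evaluation at x = 0 gives NaN.
sinc = sp.sin(x) / x

# Shift the input away from the singular point by a tiny epsilon.
safe_sinc = sinc.subs(x, x + eps)

# At x = 0 with a small positive epsilon, the value is ~1 instead of NaN.
print(safe_sinc.subs({x: 0, eps: 1e-10}))
```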

Sorry to quote so much from the paper, but I really didn't see any way I could improve upon what the original authors wrote.

At this point you're probably wondering: how much does this improve performance? For a matrix multiply in C++, they were able to get a speedup of 1.7x to 12.5x, depending on the matrix size, the sparsity pattern, and the CPU the computation was performed on. In another benchmark, they counted operations for a combination of matrix inversion, Jacobian computation, and matrix multiplication. Automatic differentiation took 356 operations, while their "flattening" system resulted in 91 operations, an improvement of 3.9x.

SymForce: Symbolic computation and code generation for robotics

#solidstatelife #ai #computerscience #symboliccomputation