Products

Acceleration and AAD

With our products, you can code in traditional object-oriented languages such as C++ or Python, and MatLogica takes care of tedious tasks such as performance optimization and automatic caching.

Modern compilers for object-oriented languages are not optimized for calculation-intensive tasks. Extensive use of abstraction and virtual functions makes code easy to read, but the performance penalty is high.

Writing high-performance, vectorized, thread-safe code is tedious and time-consuming, and the result is usually hard to maintain. MatLogica takes care of performance so developers can focus on adding value.

From pure acceleration to fast AAD

Our product range

Accelerator

MatLogica's Accelerator uses native CPU vectorization and multi-threading, delivering performance comparable to a GPU. For problems such as Monte-Carlo simulations, historical analysis, and "what-if" scenarios, speed can be increased by 6-100x, depending on the performance of the original code.
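
As an illustration of the kind of workload the Accelerator targets, here is a plain path-by-path Monte-Carlo pricer for a European call. It is a generic sketch in Python; the names and code are illustrative only and do not use MatLogica's API.

```python
# Illustrative workload only (not MatLogica's API): a path-by-path
# Monte-Carlo pricer for a European call, written as a plain Python loop.
import math
import random

def price_call_mc(spot, strike, rate, vol, maturity, n_paths=100_000):
    """Simulate terminal spots one path at a time and average the discounted payoff."""
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        s_t = spot * math.exp((rate - 0.5 * vol * vol) * maturity
                              + vol * math.sqrt(maturity) * z)
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

print(price_call_mc(spot=100.0, strike=105.0, rate=0.02, vol=0.2, maturity=1.0))
```

Every path runs the same instructions on different random numbers, which is exactly the structure that CPU vector lanes and multiple cores can exploit.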

AADC Library

Calculating derivatives is essential in finance, machine learning, and many other scientific and engineering fields. Our innovative compiler speeds up the AAD method itself, delivering pricing and scenario analysis unattainable with competing products. MatLogica's approach enables AAD calculations in legacy code, whereas other tools require extensive effort and changes to the source code.
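
To make the idea concrete, below is a minimal, hand-written example of the adjoint (reverse-mode) technique that AAD is built on: one forward pass computes the value, and one backward sweep yields the sensitivities to every input at once. This is a generic sketch of the method, not the AADC interface.

```python
# A minimal, hand-written adjoint (reverse-mode) sketch -- the generic AAD idea,
# not the AADC interface. Function: f(a, b) = exp(a * b) + sin(a).
import math

def f_with_adjoints(a, b):
    # Forward pass: compute and keep the intermediates.
    u = a * b
    v = math.exp(u)
    w = math.sin(a)
    y = v + w

    # Reverse sweep: propagate the adjoint of the output (y_bar = 1)
    # back through each operation, accumulating dy/d(input).
    y_bar = 1.0
    v_bar = y_bar                # y = v + w
    w_bar = y_bar
    u_bar = v_bar * math.exp(u)  # v = exp(u)
    a_bar = w_bar * math.cos(a)  # w = sin(a)
    a_bar += u_bar * b           # u = a * b
    b_bar = u_bar * a
    return y, a_bar, b_bar

y, dy_da, dy_db = f_with_adjoints(1.2, 0.7)
print(y, dy_da, dy_db)  # both sensitivities from a single backward sweep
```

Bump-and-revalue needs one extra pricing run per input; the backward sweep delivers all sensitivities at a roughly fixed extra cost, which is what makes AAD so attractive for risk calculations.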

GPU

We are actively researching GPU applications of our technology and expect to introduce the first GPU-compatible release in late 2022/early 2023. It will enable object-oriented programming (in C++ or Python) on a GPU without complicating the infrastructure.

A game-changing innovation

How do MatLogica's products work?

MatLogica's easy-to-integrate JIT compiler converts user code (C++ or Python) into machine code with the minimal number of operations theoretically necessary to complete the task.

The result is far fewer operations for the CPU to execute. We then add vectorization and multi-threading, extracting the maximum theoretically possible speed-up from a modern CPU - something other libraries fail to achieve.
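
As a rough analogy for what vectorization across samples means, the sketch below rewrites the Monte-Carlo loop from the Accelerator example with array operations, so each instruction processes a whole batch of paths at once. It uses NumPy purely for illustration; MatLogica applies the equivalent transformation in the generated machine code using native CPU vector instructions and threads.

```python
# Rough analogy only: the earlier path-by-path loop, vectorized across paths
# with NumPy so each operation acts on an array of paths at once.
import numpy as np

def price_call_mc_vectorized(spot, strike, rate, vol, maturity, n_paths=100_000):
    z = np.random.standard_normal(n_paths)              # all paths in one draw
    s_t = spot * np.exp((rate - 0.5 * vol ** 2) * maturity
                        + vol * np.sqrt(maturity) * z)   # one instruction stream, many paths
    payoffs = np.maximum(s_t - strike, 0.0)
    return float(np.exp(-rate * maturity) * payoffs.mean())

print(price_call_mc_vectorized(spot=100.0, strike=105.0, rate=0.02, vol=0.2, maturity=1.0))
```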

MatLogica's AADC library additionally computes the reverse accumulation equations directly in machine code (other libraries use a tape), resulting in far better performance than the alternative approaches.
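
For reference, the reverse accumulation equations are the standard adjoint recurrence of reverse-mode differentiation. For a simple chain of operations they take the form below; this is the generic textbook formulation, not anything specific to AADC's implementation.

```latex
% Reverse accumulation for y = f_3(f_2(f_1(x))),
% with intermediates u_1 = f_1(x) and u_2 = f_2(u_1):
\bar{y} = 1, \qquad
\bar{u}_2 = \bar{y}\, f_3'(u_2), \qquad
\bar{u}_1 = \bar{u}_2\, f_2'(u_1), \qquad
\bar{x} = \bar{u}_1\, f_1'(x) = \frac{dy}{dx}
```

A tape-based library records these operations at run time and replays them backwards; AADC instead compiles the backward sweep itself into machine code.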

Our tests show that AADC computes both the original function and its derivatives faster than the original code computes the function alone, often by a factor of 6-100. This is achieved with minimal changes to the original code, since MatLogica's compiler does virtually all the work.