Frequently Asked Questions

This section answers the most frequently asked questions. If you have any other questions, talk to us!


  • AADC is a just-in-time graph compiler tailored for complex repetitive calculations and Automatic Adjoint Differentiation.

  • By using operator overloading with the “idouble” active type, AADC extracts the valuation graph at runtime and forms binary instructions that replicate the user's analytics.

  • Unlike traditional compilers, AADC does not use your source code directly. Instead, the program is executed for just one data sample (such as a single MC path), so that the sequence of primitive numerical operations (such as +, -, exp()) can be sent to AADC and encoded into equivalent machine code for direct CPU execution.

  • Speed and ease of integration! The analytics will run much faster (6x-100x) by utilizing native CPU vectorization, multi-threading, and AAD. Code optimization at runtime is much more efficient than what regular compilers can do ahead of execution. You can code in an object-oriented language and get data-oriented performance! Running the adjoint code takes less time than the code transformation itself.

  • The convenience of object-oriented design does not come free: performance is lost in function calls, virtual functions, and other object-oriented abstractions. By compiling at runtime, we get better inlining, better memory allocation, and full AVX2/AVX512 vectorization.

  • Traditional operator-overloading libraries can only accelerate analytical risk calculations, and they actually slow down the original code! AADC uses patent-pending technology that can speed up risk, pricing, historical, and what-if scenarios. It is far easier to integrate and is future-proof, taking advantage of multi-core hardware and GPUs through the same interface.

  • Easy! Our active type is a drop-in replacement for the native type. We can integrate AADC in 2 weeks.

  • AADC can handle implicit functions almost automatically!

  • It can be done with just a two-fold memory increase! Ask us for a reference implementation.

  • AADC is perfect for scripting languages. Get your custom payoffs compiled at runtime into optimal binary code and apply AAD to them. No more runtime interpreter penalty!

  • Third-party functions are handled through a special wrapper interface. You can provide local derivatives or use bump-and-revalue locally.

  • No! It would be way too slow. We've invested a lot in R&D to create a really fast streaming compiler.

  • You can send AADC Kernels to cloud providers for execution. This way you can keep your data and analytics safe on premises.

  • We support C++ and Python. More languages to come!

  • Although we don't support GPUs at the moment, it is possible to use analytics implemented for GPU with AADC. With minimal changes, your existing CUDA code can be adapted for AADC and executed with multi-threading and vectorization on a CPU. Unlike a GPU, a CPU has plenty of memory to solve large problems and support AAD!
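
To make the recording step described above concrete, here is a minimal, self-contained sketch of the operator-overloading technique: an active type records each primitive operation onto a tape while the user code runs on a single sample; the tape can then be replayed on new inputs and swept backwards for adjoint (reverse-mode) derivatives. This illustrates the general idea only — the `Tape`/`ADouble` names are invented for the example and are not the AADC API, and AADC emits native machine code rather than interpreting a Python tape.

```python
import math

class Tape:
    def __init__(self):
        self.ops = []     # (kind, out_slot, a_slot, b_slot) primitive records
        self.vals = []    # current value held in each tape slot

TAPE = Tape()

class ADouble:
    """Active double: records +, * and exp() onto TAPE as the code runs."""
    def __init__(self, value):
        self.slot = len(TAPE.vals)
        TAPE.vals.append(value)

    def __add__(self, other):
        out = ADouble(TAPE.vals[self.slot] + TAPE.vals[other.slot])
        TAPE.ops.append(("add", out.slot, self.slot, other.slot))
        return out

    def __mul__(self, other):
        out = ADouble(TAPE.vals[self.slot] * TAPE.vals[other.slot])
        TAPE.ops.append(("mul", out.slot, self.slot, other.slot))
        return out

def aexp(x):
    out = ADouble(math.exp(TAPE.vals[x.slot]))
    TAPE.ops.append(("exp", out.slot, x.slot, None))
    return out

def replay(inputs):
    """Re-execute the recorded graph on new input values. The user code is
    never rerun; AADC would instead emit vectorized machine code for this."""
    for slot, v in inputs.items():
        TAPE.vals[slot] = v
    for kind, out, a, b in TAPE.ops:
        if kind == "add":
            TAPE.vals[out] = TAPE.vals[a] + TAPE.vals[b]
        elif kind == "mul":
            TAPE.vals[out] = TAPE.vals[a] * TAPE.vals[b]
        elif kind == "exp":
            TAPE.vals[out] = math.exp(TAPE.vals[a])

def adjoints(out_slot):
    """Reverse sweep: one pass gives d(output)/d(every input)."""
    bar = [0.0] * len(TAPE.vals)
    bar[out_slot] = 1.0
    for kind, out, a, b in reversed(TAPE.ops):
        if kind == "add":
            bar[a] += bar[out]
            bar[b] += bar[out]
        elif kind == "mul":
            bar[a] += bar[out] * TAPE.vals[b]
            bar[b] += bar[out] * TAPE.vals[a]
        elif kind == "exp":
            bar[a] += bar[out] * TAPE.vals[out]   # d exp(u)/du = exp(u)
    return bar

# Record the graph by running the user analytics once, on a single sample:
x, y = ADouble(1.0), ADouble(2.0)
f = aexp(x) * y + x                # f(x, y) = exp(x)*y + x

# Replay the same recorded graph on a different sample:
replay({x.slot: 0.0, y.slot: 3.0})
print(TAPE.vals[f.slot])           # exp(0)*3 + 0 = 3.0

# One reverse sweep yields all sensitivities at once:
bar = adjoints(f.slot)
print(bar[x.slot], bar[y.slot])    # df/dx = exp(0)*3 + 1 = 4.0, df/dy = exp(0) = 1.0
```

The reverse sweep is what makes all sensitivities available at a small constant multiple of one valuation; a production implementation would additionally support the full operator set and compile the tape to vectorized, multi-threaded binary code.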
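
The "almost automatic" handling of implicit functions mentioned above can be understood via the implicit function theorem: if y(x) is defined by g(x, y) = 0, then dy/dx = -(∂g/∂x)/(∂g/∂y), so the iterative solver itself never needs to be differentiated or recorded — only the local partials at the converged solution are used. A hedged sketch under that assumption (the function names here are illustrative, not AADC's interface):

```python
def g(x, y):
    # Implicit relation defining y(x): g(x, y) = y**2 - x = 0
    return y * y - x

def solve_y(x, y0=1.0, tol=1e-12):
    """Black-box Newton solve for y(x); its internal iterations are
    irrelevant to the derivative attached afterwards."""
    y = y0
    for _ in range(50):
        y -= g(x, y) / (2.0 * y)   # Newton step, using dg/dy = 2y
        if abs(g(x, y)) < tol:
            break
    return y

x = 4.0
y = solve_y(x)                 # y = sqrt(4) = 2.0
dg_dx, dg_dy = -1.0, 2.0 * y   # partials of g at the converged solution
dy_dx = -dg_dx / dg_dy         # = 1/(2*sqrt(x)) = 0.25
print(y, dy_dx)
```

This is why an AAD tool only needs the solved value and the local partials, regardless of how many solver iterations were run.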
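
Similarly, "bump-and-revalue locally" for a third-party black box can be pictured as splicing a finite-difference estimate of the external function's local derivative into an otherwise exact adjoint chain. A toy sketch, where `ext` and `local_derivative` are invented names for illustration:

```python
import math

def ext(x):
    # Stand-in for an opaque third-party call whose source is unavailable;
    # here it happens to be sin(x) so the result can be checked.
    return math.sin(x)

def local_derivative(f, x, h=1e-6):
    # Central-difference "bump-and-revalue" applied only at this one node.
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.5
y_bar = 2.0                                 # adjoint arriving from downstream
x_bar = y_bar * local_derivative(ext, x)    # chain rule with the local estimate
print(x_bar)                                # close to 2*cos(0.5)
```

Only the wrapped node pays the bumping cost; every other node in the graph keeps its exact analytic adjoint.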