MatLogica | FAQ

FAQ

Frequently Asked Questions About AADC

Get answers to common questions about MatLogica AADC implementation, performance, integration, and technical capabilities. Can't find what you're looking for?

Talk to Our Team

Common Questions About AADC Implementation

Find answers to frequently asked questions about MatLogica AADC - from technical implementation details to performance benchmarks, integration timelines, and cloud deployment. This FAQ is regularly updated based on questions from quant developers, risk managers, and technical decision makers.

Technical Architecture & Core Concepts

  • AADC (Automatic Adjoint Differentiation Compiler) is a just-in-time graph compiler specifically designed for complex repetitive calculations and Automatic Adjoint Differentiation.

    Key characteristics:

    • Generates optimized binary kernels at runtime
    • Leverages AVX2/AVX512 vectorization automatically
    • Provides thread-safe multi-core parallelization
    • Achieves 6-100x speedup over traditional approaches
    • Computes all derivatives with adjoint factor <1

    Use cases: XVA calculations, Monte Carlo pricing, real-time risk (Live Risk), model calibration, scenario analysis, stress testing.

  • Operator overloading with the "idouble" active type allows AADC to extract the valuation graph during a single execution of your analytics.

    Process:

    1. Execute function once with idouble type (records operations)
    2. AADC captures sequence of elementary operations
    3. Forms binary instructions that replicate your analytics
    4. Generates both forward and adjoint code
    5. Optimizes for AVX2/AVX512 vectorization

    Advantage: No source code access required - works at runtime with your existing codebase.
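
    As a rough illustration of the recording mechanism (a toy sketch, not the AADC API - the ActiveDouble type below merely stands in for AADC's idouble):

      // Toy illustration of operator-overloading-based recording (not the AADC API).
      // A minimal "active double" logs every addition and multiplication it takes
      // part in, so a single execution of the analytics yields the full sequence
      // of primitive operations that a JIT compiler could turn into kernels.
      #include <cstdio>
      #include <vector>

      struct Op { char code; int lhs, rhs, result; };   // one recorded primitive
      static std::vector<Op> g_recording;               // global recording buffer
      static int g_next_id = 0;

      struct ActiveDouble {
          double value;   // the numeric value, exactly as a plain double would hold it
          int    id;      // node index in the recorded valuation graph
          ActiveDouble(double v = 0.0) : value(v), id(g_next_id++) {}
      };

      static ActiveDouble record(char code, const ActiveDouble& a, const ActiveDouble& b, double v) {
          ActiveDouble r(v);
          g_recording.push_back({code, a.id, b.id, r.id});
          return r;
      }

      ActiveDouble operator+(const ActiveDouble& a, const ActiveDouble& b) { return record('+', a, b, a.value + b.value); }
      ActiveDouble operator*(const ActiveDouble& a, const ActiveDouble& b) { return record('*', a, b, a.value * b.value); }

      // Analytics written once against a generic number type: the same source
      // runs with double (production) or ActiveDouble (recording pass).
      template <class Real>
      Real payoff(Real spot, Real strike, Real notional) {
          return notional * (spot + strike);   // placeholder analytics
      }

      int main() {
          ActiveDouble s(100.0), k(95.0), n(1.0e6);
          payoff(s, k, n);                                   // single recording run
          std::printf("recorded %zu primitive ops\n", g_recording.size());
      }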

  • AADC is fundamentally different from traditional compilers:

    Traditional Compilers (C++, LLVM):

    • Work with source code at compile time
    • Don't know runtime data patterns
    • Can't optimize for specific computational paths
    • Too slow for runtime code generation

    AADC:

    • Executes program for one data sample (e.g., single MC path)
    • Captures actual sequence of primitive numerical operations (+, -, exp(), etc.)
    • Encodes into optimized assembly for direct CPU execution
    • Generates code in milliseconds, not seconds
    • Knows exact control flow paths taken at runtime

    Result: Better optimization than ahead-of-time compilers because AADC sees the actual execution pattern.

Performance & Speedup

  • Two Major Benefits: Speed and Ease of Integration

    Performance Gains:

    • 6-100x faster analytics utilizing native CPU vectorization, multi-threading, and AAD
    • Adjoint factor <1: Original + all derivatives faster than original alone
    • 23x faster than tape-based Adept for XVA (single core benchmark)
    • Linear scaling with number of cores

    Why So Fast:

    • Runtime optimization more efficient than ahead-of-time compilation
    • Full AVX2/AVX512 vectorization (4-8 samples per cycle)
    • Optimal memory allocation patterns
    • Perfect in-lining based on actual execution flow

    Key Advantage: Code in object-oriented style, get data-oriented performance automatically. Adjoint code generation takes less time than code transformation approaches.

  • Object-oriented code convenience has performance costs that AADC eliminates:

    Problems with Traditional OO Code:

    • Function call overhead
    • Virtual function indirection
    • Object-oriented abstractions
    • Poor vectorization due to scattered data
    • Missed optimization opportunities

    AADC Runtime Optimizations:

    • Better in-lining: Eliminates all function call overhead
    • Optimal memory allocation: Data laid out for vectorization
    • Full AVX2/AVX512 vectorization: 4-8 operations per cycle
    • Loop unrolling: Reduces branch prediction misses
    • Constant folding: Eliminates repeated calculations

    Example: Traditional OO Monte Carlo might process 1 path at a time. AADC automatically processes 8 paths simultaneously using AVX512.
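
    A hand-written sketch of that contrast (illustrative only - in practice AADC emits the vectorized kernel automatically from path-at-a-time code):

      // Path-at-a-time vs. data-oriented processing (hand-written sketch;
      // AADC generates equivalent vectorized kernels automatically).
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Object-oriented style: one Monte Carlo path advanced per call.
      double step_one_path(double s, double drift, double vol, double z) {
          return s * std::exp(drift + vol * z);
      }

      // Data-oriented style: paths stored contiguously and advanced in blocks of 8,
      // matching the 8 doubles held in a single AVX-512 register (assumes the path
      // count is a multiple of 8 for brevity).
      void step_paths_blocked(std::vector<double>& s, double drift, double vol,
                              const std::vector<double>& z) {
          for (std::size_t i = 0; i + 8 <= s.size(); i += 8) {
              for (std::size_t lane = 0; lane < 8; ++lane) {   // vectorizable inner loop
                  s[i + lane] *= std::exp(drift + vol * z[i + lane]);
              }
          }
      }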

  • Critical Difference: AADC Accelerates Everything, Not Just Risk

    Traditional Tape-Based AAD:

    • Only accelerates analytical risk calculations
    • Actually slows down the original code (tape overhead)
    • Adjoint factor 2-5x
    • High memory usage (tape storage)
    • Cannot accelerate second-order Greeks or scenarios

    AADC Code Generation AAD:

    • Speeds up risk, pricing, historical VaR, and what-if scenarios
    • Adjoint factor <1: Faster than original alone
    • Low memory usage
    • Accelerates second-order Greeks via bump-and-revalue of AAD deltas
    • 5-20x faster than tape-based AAD for organizations already using AAD

    Integration: Easier to integrate than tape-based, no tape management complexity

    Future-proof: Takes advantage of multi-core hardware automatically; GPU support planned

    Technology: Patent-pending approach using specialized JIT compiler
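
    A minimal sketch of the second-order point above, assuming a hypothetical aad_delta callable that returns the AAD-computed delta for a given spot:

      // Gamma via bump-and-revalue of AAD deltas: aad_delta is a hypothetical
      // callable returning dV/dS computed by an AAD kernel at a given spot;
      // gamma is then a central difference of two such delta evaluations.
      #include <functional>

      double gamma_from_aad_deltas(const std::function<double(double)>& aad_delta,
                                   double spot, double rel_bump = 1e-4) {
          const double h = spot * rel_bump;                    // relative bump size
          const double delta_up   = aad_delta(spot + h);
          const double delta_down = aad_delta(spot - h);
          return (delta_up - delta_down) / (2.0 * h);          // d^2V/dS^2
      }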

Integration & Implementation

  • Easy Integration - Initial Results in 2 Weeks

    Drop-in Replacement: AADC's active type (idouble) is a drop-in replacement for native double type.

    Typical Integration Timeline:

    • Week 1-2: Initial integration, first results
    • Week 3-4: Expand to key pricing/risk functions
    • Week 5-8: Full production integration
    • Ongoing: Extend to additional models and products

    Semi-Automated Approach: MatLogica's standardized integration methodology delivers results quickly and in a controlled fashion

    No Major Code Changes:

    • No template metaprogramming required
    • No code transformation needed
    • No control flow restrictions
    • Works with legacy code

    Support: MatLogica provides integration support, debugging tools, and a checkpointing toolkit
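
    A minimal sketch of the drop-in type substitution and the build-time switch it enables (the macro and header names below are illustrative, not AADC's actual spelling; only "idouble" itself comes from this FAQ):

      // Build-time switch controlling whether analytics run on native doubles
      // or on the AADC active type. Macro and header names are illustrative.
      #ifdef USE_AADC
        #include <aadc/idouble.h>   // hypothetical header exposing idouble
        using Real = idouble;
      #else
        using Real = double;        // plain build: no AADC dependency at all
      #endif

      // Existing pricing code, unchanged apart from using the Real alias.
      Real discounted_payoff(Real cashflow, Real discount_factor) {
          return cashflow * discount_factor;
      }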

  • Third-party functions handled via special wrapper interface:

    Two Options:

    1. Provide local derivatives: If you know the mathematical derivatives
    2. Use bump-and-revalue locally: For black-box components

    How It Works:

    • Wrap third-party function calls with AADC interface
    • AADC maintains AAD chain through the system
    • Only the specific third-party component uses alternative method
    • Rest of system benefits from full AAD acceleration

    Common Use Cases:

    • Proprietary vendor pricing libraries
    • Legacy Fortran code
    • Third-party calibration routines
    • External PDE solvers

    Performance: Minimal impact - only the wrapped function uses finite differences, everything else gets full AAD
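
    A minimal sketch of option 2, local bump-and-revalue for a black-box function (the wrapper shown here is illustrative; the actual AADC wrapper interface differs):

      // Wrapping a black-box third-party function so the surrounding AAD chain
      // can continue through it: the wrapper returns both the value and a local
      // derivative obtained by central finite differences. How the derivative is
      // registered with the AAD chain is AADC-specific and omitted here.
      #include <cmath>
      #include <functional>
      #include <utility>

      std::pair<double, double>               // {value, d(value)/dx}
      call_black_box(const std::function<double(double)>& external_fn,
                     double x, double rel_bump = 1e-6) {
          const double h = (x != 0.0) ? std::abs(x) * rel_bump : rel_bump;
          const double value = external_fn(x);
          const double deriv = (external_fn(x + h) - external_fn(x - h)) / (2.0 * h);
          return {value, deriv};
      }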

Advanced Features

  • Yes - AADC handles implicit functions almost automatically!

    Automated Implicit Function Theorem (IFT): MatLogica's breakthrough technology automates differentiation of calibration routines

    What It Means:

    • Exact-fit calibration: Fully automated differentiation (e.g., yield curve bootstrap)
    • Nearly-exact calibration: Approximate solutions (e.g., volatility surface fitting)
    • Solver-based routines: Newton-Raphson, optimization, root-finding

    Applications:

    • Yield curve calibration with AAD
    • Volatility surface fitting with sensitivities
    • Model parameter calibration (Heston, SABR, etc.)
    • American option pricing with LSM

    Benefit: No manual derivative coding for calibration routines - enables Live Risk with real-time recalibration

    Read more about Automated IFT for Live Risk
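
    For reference, the standard implicit function theorem identity behind this automation, written for an exact-fit calibration where the parameters theta(x) solve the residual equations F(x, theta(x)) = 0 for market inputs x:

      % Standard IFT identity behind automated calibration sensitivities:
      % F(x, \theta(x)) = 0 defines the calibrated parameters \theta implicitly,
      % so their sensitivities follow without differentiating the solver itself.
      \[
      F\bigl(x, \theta(x)\bigr) = 0
      \quad\Longrightarrow\quad
      \frac{\partial \theta}{\partial x}
        = -\left(\frac{\partial F}{\partial \theta}\right)^{-1}
           \frac{\partial F}{\partial x}.
      \]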

  • AAD for LSM (Longstaff-Schwartz Method) is fully supported with only a 2x memory increase!

    Challenge with LSM: The regression at each time step creates data dependencies, making straightforward AAD expensive

    AADC Solution: An efficient implementation requiring only a two-fold memory increase (much better than the naive approach)

    Applications:

    • American option pricing with automatic Greeks
    • Bermudan swaption Greeks
    • Callable bonds with AAD
    • Early exercise features in structured products

    Performance: Full AAD speedup maintained despite regression complexity

    Ask us for a reference implementation - contact info@matlogica.com
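
    For context, the regression step that creates those data dependencies (standard Longstaff-Schwartz, not AADC-specific):

      % Longstaff-Schwartz regression at an exercise date t: the continuation value
      % is approximated over basis functions \phi_j of the state S_t, with the
      % coefficients \beta fitted by least squares across all simulated paths
      % (row i of X holds \phi_j(S_t^{(i)}); Y_i is the discounted future payoff of path i).
      \[
      C_t(S_t) \approx \sum_{j=1}^{J} \beta_j \, \phi_j(S_t),
      \qquad
      \beta = (X^\top X)^{-1} X^\top Y .
      \]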

  • AADC is perfect for scripting languages - best of both worlds!

    The Problem with Scripting:

    • Flexibility and ease of use (good)
    • Severe runtime interpreter penalty (bad)
    • No vectorization or optimization (bad)

    AADC Solution:

    • Write payoffs/models in scripting language
    • AADC compiles them at runtime into optimal binary code
    • Apply AAD automatically
    • No runtime interpreter penalty anymore!

    Benefits:

    • Traders can define custom payoffs easily
    • Get compiled performance without C++ complexity
    • Automatic Greeks for custom products
    • Rapid product development

    Use Cases:

    • Custom structured product payoffs
    • User-defined trading strategies
    • Domain-specific languages (DSLs) for quants
    • Python-based analytics with C++ performance

Deployment & Infrastructure

  • No - external compilers would be way too slow for runtime compilation

    Why LLVM/C++ Don't Work:

    • LLVM compilation takes seconds to minutes
    • Designed for compile-once, run-many-times
    • Too slow for changing portfolios or intraday pricing
    • Not practical for production Live Risk

    MatLogica's Investment: Significant R&D went into creating a fast streaming compiler built specifically for:

    • Runtime code generation
    • Millisecond compilation times
    • Direct machine code emission
    • AAD-specific optimizations

    Result: Compilation fast enough for practical use - new trades, portfolio changes, intraday recalibration all handled seamlessly

  • Yes - AADC is designed for secure cloud deployment!

    Cloud Deployment Model:

    • Send AADC kernels to cloud as encrypted binary code
    • Keep data and analytics safe on premises
    • Only computational instructions go to cloud
    • Nearly impossible to reverse-engineer proprietary models

    Benefits:

    • Up to 99% cloud cost reduction vs traditional approaches
    • Data sovereignty maintained (compliance-friendly)
    • Elastic scaling without exposing IP
    • Burst compute for stress testing

    Architecture: "Guilt-free Live Risk" - get cloud scalability without security compromise

    Read full cloud Live Risk architecture guide

  • Currently Supported Languages:

    • C++: Full support, most mature integration
    • C#: .NET integration for Windows-based systems
    • Python: Bindings for Python-based analytics

    Language Integration Patterns:

    • Native C++ for maximum performance
    • C# for .NET quant libraries
    • Python wrappers for data science workflows
    • Custom DSL compilation support

    More Languages Coming: Additional language support planned for future releases based on customer demand

    Mixed-Language Support: Can integrate AADC into systems using multiple languages - common in large financial institutions

  • GPU support is planned, but CPU performance often beats GPU for quant workloads

    Current Status: AADC doesn't generate GPU code yet, but your existing CUDA code can be adapted

    Migration Path:

    • With minimal changes, existing CUDA code can be adapted for AADC
    • Executes using multi-threading and vectorization on CPU
    • Often achieves comparable or better performance than GPU

    Why CPU Often Wins for Quant:

    • Memory: CPUs have plenty of memory (64GB-512GB+) vs GPU constraints (8-48GB)
    • AAD Support: Better suited for adjoint differentiation
    • Complex Models: Handle irregular control flow better
    • No Data Transfer: Eliminate PCIe bottleneck
    • Deployment: Easier in production (no GPU management)

    Performance: AVX512 CPU with AADC often matches or exceeds GPU performance for typical quant workloads while supporting larger problem sizes

    Future: GPU support planned for specialized use cases where massive parallelism benefits outweigh limitations

  • No Lock-In - Your Code Remains Yours

    Realistic Assessment:

    • MatLogica's core product is a tech-heavy JIT compiler
    • Few businesses are willing to develop their own compilers
    • Development, maintenance, and improvement efforts shared across multiple satisfied users
    • Lower cost than in-house development

    No Vendor Lock-In:

    • Run your analytics with or without AADC seamlessly
    • Simple compile-time switch to disable AADC
    • Code works with native doubles if needed
    • Can always go back to the way things were before
    • Can switch to alternative AAD tool

    Exit Strategy:

    1. Your source code is unchanged (just type replacements)
    2. Remove idouble, use native double
    3. Recompile and run as before
    4. Or switch to different AAD library

    Risk Mitigation: Many organizations maintain dual-mode capability - can run with or without AADC for validation and business continuity

Still Have Questions?

Can't find the answer you're looking for? Our team of experts is here to help.

Schedule a Technical Discussion

Talk to our technical team about your specific use case, integration requirements, and performance expectations.

Book a Call

Request a Benchmark

See AADC performance on your actual code. We can run a benchmark on your production models to show real-world speedups.

Contact Us

Related Resources

Topics covered: AADC FAQ, automatic adjoint differentiation questions, code generation AAD integration time, AADC vs tape-based AAD comparison, JIT compiler AAD, operator overloading AAD, adjoint factor less than 1, cloud AADC deployment, GPU CUDA AAD support, implicit function theorem AAD, Longstaff-Schwartz AAD, scripting language AAD, third-party library integration AAD, AVX2 AVX512 vectorization, multi-threading AAD, drop-in replacement AAD