MatLogica Research & Publications: AAD, XVA, QuantLib

Research & Publications

Our team constantly develops new features, applications, and benchmarks in automatic adjoint differentiation, XVA pricing, and quantitative finance optimization. We publish peer-reviewed research, case studies, and open-source benchmarks in collaboration with Intel, academic institutions, and financial practitioners.

MatLogica Research in Automatic Adjoint Differentiation

MatLogica maintains an extensive research program in automatic adjoint differentiation (AAD), quantitative finance optimization, and high-performance computing. The research portfolio includes peer-reviewed academic papers, industry whitepapers, conference presentations, case studies, and open-source benchmarks.

Key research areas: Automatic Adjoint Differentiation theory and applications, XVA pricing optimization (1770x speedup demonstrated with Intel), QuantLib and ORE performance acceleration, code generation techniques, neural network synthesis using AAD, calibration algorithms, Implicit Function Theorem applications, GPU vs CPU performance analysis, and autocallable derivatives Greeks calculation with 90% cost reduction.

Notable collaborations: Intel Corporation (multiple performance whitepapers), Prof. Roland Olsson (neural network research achieving 3x better accuracy), Risk.net (peer-reviewed publications), WBS Quantitative Finance Conferences, academic institutions worldwide, and financial institutions (case studies).

Open-source contributions: XVA Benchmark repository on GitHub, QuantLib integration examples, performance benchmarking tools, and reference implementations available to the quantitative finance community.

Research impact: publications demonstrate 1770x speedup for XVA pricing, 90% computational cost reduction for autocallable Greeks, 10x+ faster performance than ML frameworks, 100x acceleration for ORE LiveRisk, and 350x improvement for QuantLib XVA calculations. All results are reproducible with published benchmarks and code.

Accurate Greeks for Autocallables: AAD and Smoothing Solutions

Presented at the WBS 2025 Conference in Palermo, this work demonstrates how combining mathematical smoothing with AAD solves the autocallable Greeks challenge: a 90% computational cost reduction with a production-ready C++ implementation, open-source benchmarks, and validated convergence for the $104B autocallable market.
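
For intuition, here is a minimal, self-contained C++ sketch of the smoothing idea (the payoff, barrier level, and kernel width below are illustrative choices, not the paper's implementation): the hard indicator 1{S >= B} has zero derivative almost everywhere, so pathwise and AAD deltas of a digital coupon vanish; replacing it with a sigmoid of width eps makes them well defined.

```cpp
#include <cmath>
#include <cstdio>

// Hard barrier indicator 1{S >= B} has zero derivative a.e., so pathwise/AAD
// Greeks of a digital or autocall coupon are identically zero, and bumped
// Monte Carlo Greeks are noisy. Smoothing replaces the step with a sigmoid.
double smoothIndicator(double S, double B, double eps) {
    return 1.0 / (1.0 + std::exp(-(S - B) / eps));
}

// d/dS of the smoothed indicator: a bump of width ~eps around the barrier.
double smoothIndicatorDeriv(double S, double B, double eps) {
    double p = smoothIndicator(S, B, eps);
    return p * (1.0 - p) / eps;
}

int main() {
    double B = 100.0, eps = 0.5, coupon = 5.0;
    for (double S : {95.0, 99.5, 100.5, 105.0}) {
        double value = coupon * smoothIndicator(S, B, eps);
        double delta = coupon * smoothIndicatorDeriv(S, B, eps); // pathwise delta
        std::printf("S=%6.1f  value=%6.3f  delta=%6.3f\n", S, value, delta);
    }
}
```

As eps shrinks, the smoothed price converges to the discontinuous one; controlling that trade-off is exactly the convergence the work validates.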

Speeding Up QuantLib with MatLogica's AADC Library - by Maksim Kozyarchuk

In this blog post, Maksim explores how MatLogica's AADC supercharges QuantLib, dramatically accelerating pricing and risk calculations without requiring a rewrite. By leveraging cutting-edge compiler techniques and automatic differentiation, this integration unlocks unprecedented performance gains for quants and developers alike.
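
As a hedged illustration of why no rewrite is required, consider a toy forward-mode dual number (the real AADC tool records an adjoint tape and generates kernels instead; `Dual` and `discountedForward` are our own illustrative names): analytics templated on the number type, as much QuantLib-style code is, become differentiable simply by instantiating them with an active type.

```cpp
#include <cmath>
#include <cstdio>

// A toy "active" number: value plus one derivative (forward-mode dual).
struct Dual {
    double v;  // value
    double d;  // derivative w.r.t. the chosen input
};
Dual operator-(Dual a) { return {-a.v, -a.d}; }
Dual operator-(Dual a, Dual b) { return {a.v - b.v, a.d - b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual exp(Dual a) { double e = std::exp(a.v); return {e, e * a.d}; }

// Existing analytics, written once against a generic Real type: no rewrite
// is needed to differentiate it, only instantiation with an active type.
template <class Real>
Real discountedForward(Real spot, Real strike, Real r, Real t) {
    return exp(-(r * t)) * (spot - strike);  // e^{-rt} (S - K)
}

int main() {
    Dual spot{100.0, 1.0};  // seed d=1: differentiate w.r.t. spot
    Dual strike{95.0, 0.0}, r{0.02, 0.0}, t{1.0, 0.0};
    Dual px = discountedForward(spot, strike, r, t);
    std::printf("price=%.4f  dPrice/dSpot=%.4f\n", px.v, px.d);
}
```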

Transforming a Complex C++ System into a Lightning-Fast LiveRisk Service

ORE (Open Risk Engine), built on top of QuantLib, presents a formidable challenge for quants and developers. While powerful, its complexity can make even seemingly simple tasks, like setting up a Monte Carlo model, quite daunting. This blog post demonstrates how we can transform ORE into a streamlined LiveRisk service for basic FX linear products using AADC.

Benchmark Results: AADC is 10x+ faster than JAX, PyTorch, and TensorFlow

A comprehensive benchmark comparing JAX, PyTorch, and TensorFlow against MatLogica AADC for quantitative applications, together with the use of AADC in ML applications. Independent results with downloadable source code.

Continuing CPU Performance Gains for Matlogica Financial Analytics

This Intel whitepaper demonstrates substantial performance gains from 5th Gen Intel® Xeon® processors when using MatLogica AADC for quantitative finance workloads: up to a 2.08x improvement compared to 3rd Gen Xeon CPUs, and up to 1.26x gains from Intel AVX-512 compared to the predecessor Intel AVX2 technology.

Accelerating Financial Simulations: Code Generation Kernels Explained

In this post, we discuss the origins of performance and the possibilities unveiled by the AADC Code Generation Kernels for Monte Carlo simulations and derivatives pricing.
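
A rough sketch of where the speedup originates (a toy instruction tape standing in for AADC's actual JIT-compiled machine-code kernels; all names here are illustrative): the pricing logic is recorded once, and every Monte Carlo path then replays a flat instruction stream with no object-oriented dispatch.

```cpp
#include <cstdio>
#include <vector>

// "Record once, replay many": the expensive traversal of the pricing object
// graph happens once, at recording time; each path replays a flat tape.
enum class Op { Add, Mul };
struct Instr { Op op; int lhs, rhs, out; };

struct Kernel {
    std::vector<Instr> tape;
    int slots = 0;
    int newSlot() { return slots++; }
    int emit(Op op, int a, int b) {
        int out = newSlot();
        tape.push_back({op, a, b, out});
        return out;
    }
    // Replay the recorded program on fresh inputs: a tight, branch-free loop.
    double run(std::vector<double> mem) const {
        mem.resize(slots);
        for (const Instr& i : tape)
            mem[i.out] = (i.op == Op::Add) ? mem[i.lhs] + mem[i.rhs]
                                           : mem[i.lhs] * mem[i.rhs];
        return mem.back();
    }
};

int main() {
    Kernel k;
    int x = k.newSlot(), y = k.newSlot();
    int xy = k.emit(Op::Mul, x, y);      // record x*y ...
    k.emit(Op::Add, xy, x);              // ... then x*y + x, exactly once
    for (double path : {1.0, 2.0, 3.0})  // replay per Monte Carlo path
        std::printf("f(%.0f, 4) = %.1f\n", path, k.run({path, 4.0}));
}
```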

Automatic Adjoint Differentiation for Special Functions Involving Expectations

As published in Risk.net: a peer-reviewed paper by José Brito, Andrei Goloubentsev, and Evgeny Goncharov that uses MatLogica's AADC to efficiently compute gradients of functions involving squares of expectations, as is typical for calibration in quantitative finance.
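
The structural difficulty can be sketched in one line (our notation, for illustration only): for an objective built from a squared expectation, the adjoint seed of each path depends on the estimator over all paths.

```latex
% For g(\theta) = (\mathbb{E}[X(\theta)])^2 estimated over N Monte Carlo paths,
\[
  g_N(\theta) = \bar X_N^{\,2},
  \qquad
  \bar X_N = \tfrac{1}{N}\sum_{i=1}^{N} X(\theta,\omega_i),
  \qquad
  \nabla_\theta\, g_N
  = 2\,\bar X_N \cdot \tfrac{1}{N}\sum_{i=1}^{N} \nabla_\theta X(\theta,\omega_i),
\]
% so each path's adjoint seed, 2\bar X_N, is known only after the forward
% sweep over all paths, which breaks a naive path-by-path reverse sweep.
```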

Guilt-free Live-Risk in the Cloud: a New AAD-powered Approach

Discover a target architecture for cloud-based Live Risk that uses Code Generation AAD™ to compute sensitivities quickly and cheaply, enabling guilt-free Live Risk with significant cost savings.

AAD Tools: Comparison of Approaches

A detailed analysis of AAD tools, comparing the technology, advantages, and disadvantages of tape-based, code-transformation, and code-generation AAD tools versus MatLogica AADC for quantitative finance applications.
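
To make the taxonomy concrete, here is a minimal tape-based reverse-mode engine in C++ (a deliberately naive sketch; `Var`, `leaf`, and `backward` are our illustrative names): every arithmetic operation is recorded at runtime, and one backward sweep over the tape yields all input sensitivities. Code-transformation and code-generation tools avoid re-recording this tape on every evaluation, which is where their performance edge comes from.

```cpp
#include <cstdio>
#include <vector>

// Each tape node stores local partial derivatives and parent indices.
struct Node { double dl, dr; int l, r; };
static std::vector<Node> tape;

struct Var { double v; int idx; };
Var leaf(double v) { tape.push_back({0.0, 0.0, -1, -1}); return {v, (int)tape.size() - 1}; }
Var operator*(Var a, Var b) {
    tape.push_back({b.v, a.v, a.idx, b.idx});  // d(ab)/da = b, d(ab)/db = a
    return {a.v * b.v, (int)tape.size() - 1};
}
Var operator+(Var a, Var b) {
    tape.push_back({1.0, 1.0, a.idx, b.idx});
    return {a.v + b.v, (int)tape.size() - 1};
}

// One backward sweep propagates adjoints from the output to every input.
std::vector<double> backward(Var out) {
    std::vector<double> adj(tape.size(), 0.0);
    adj[out.idx] = 1.0;
    for (int i = (int)tape.size() - 1; i >= 0; --i) {
        if (tape[i].l >= 0) adj[tape[i].l] += tape[i].dl * adj[i];
        if (tape[i].r >= 0) adj[tape[i].r] += tape[i].dr * adj[i];
    }
    return adj;
}

int main() {
    Var x = leaf(3.0), y = leaf(4.0);
    Var f = x * y + x;  // f = xy + x
    std::vector<double> adj = backward(f);
    std::printf("df/dx=%.1f  df/dy=%.1f\n", adj[x.idx], adj[y.idx]);  // 5, 3
}
```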

How to Transition from Batch Risk to Real-time Risk

We present an elegant way to transition from overnight risk calculations to live risk without embarking on a multi-year IT transformation project. We show how the Automated Implicit Function Theorem (AIFT) and a modern Automatic Adjoint Differentiation (AAD) tool can be used in real production code to achieve an 'always-on' Risk Server, and we outline the steps required to transition.
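
The identity that makes this possible is compact enough to state here (in our notation, not necessarily the paper's): with market inputs theta and calibrated model parameters c(theta) defined implicitly by the calibration conditions g(theta, c(theta)) = 0,

```latex
\[
  \frac{dV}{d\theta}
  = \frac{\partial V}{\partial \theta}
  + \frac{\partial V}{\partial c}\,\frac{dc}{d\theta},
  \qquad
  \frac{dc}{d\theta}
  = -\Big(\frac{\partial g}{\partial c}\Big)^{-1}\frac{\partial g}{\partial \theta},
\]
```

so once the calibration Jacobian is factorized, live sensitivities require no re-calibration loop inside each bump.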

An Elegant Approach to Run Existing CUDA Analytics on Both GPU and CPU, with Added Benefit of AAD

NVIDIA GPUs offer a massive number of CUDA cores, but CPUs are not far behind for quantitative workloads. See our whitepaper demonstrating how your CUDA analytics can be accelerated by AADC on a CPU, with the option of AAD.

Case Study: How a Major European Bank Revolutionized Their Front-Office Risk Management Using MatLogica AADC

MatLogica's AADC enabled the client to supercharge their analytics by introducing AAD for risk computations and to accelerate pricing and scenario analysis. The MatLogica-enhanced analytics unlocked new revenue streams, lowered infrastructure costs, and improved risk management with 15-20x speedups.

Automatic Synthesis of Neurons for Recurrent Neural Nets

Prof. Roland Olsson and his team used MatLogica's AADC to design state-of-the-art neural network architectures for time series analysis. The resulting architectures are up to 3x more accurate than available cutting-edge methods, and training time is several times lower thanks to MatLogica's technology.

Automatic Implicit Function Theorem

The paper demonstrates a little-known way of applying the Implicit Function Theorem that is important for practical AAD application and performance, particularly with complex calibration in quantitative finance.
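
A one-dimensional toy example (our own construction, not the paper's) shows the mechanism end to end: calibrate c(theta) once with Newton's method, then obtain dc/dtheta from the implicit-function identity stated in the live-risk entry above, rather than by bumping and re-calibrating.

```cpp
#include <cstdio>

// Toy calibration: c(theta) solves g(theta, c) = c^3 + c - theta = 0
// (monotone in c, so the root is unique).
double g(double theta, double c) { return c * c * c + c - theta; }
double dg_dc(double c)           { return 3.0 * c * c + 1.0; }
double dg_dtheta()               { return -1.0; }

// Calibrate c(theta) with Newton's method.
double calibrate(double theta) {
    double c = 0.0;
    for (int i = 0; i < 50; ++i) c -= g(theta, c) / dg_dc(c);
    return c;
}

int main() {
    double theta = 2.0;
    double c = calibrate(theta);                       // c = 1 here
    // IFT: dc/dtheta = -(dg/dc)^{-1} * dg/dtheta, no re-calibration needed.
    double ift = -dg_dtheta() / dg_dc(c);
    // Cross-check against bump-and-recalibrate.
    double h = 1e-6;
    double fd = (calibrate(theta + h) - calibrate(theta - h)) / (2.0 * h);
    std::printf("c=%.6f  dc/dtheta (IFT)=%.6f  (bump)=%.6f\n", c, ift, fd);
}
```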

Adjoint Differentiation for Generic Matrix Functions

No doubt, AAD is amazing. However, implementing it in practice involves many subtleties. For instance, how does one deal with operations requiring an SVD decomposition? Our researchers have found an elegant solution to this problem.
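
For a flavour of what such adjoints look like, here is the classical divided-difference result for the symmetric-eigendecomposition special case (shown for intuition only; the paper's SVD-based treatment covers the generic case):

```latex
% For symmetric A = Q \Lambda Q^{\top} and B = f(A) = Q f(\Lambda) Q^{\top},
% the adjoint of A given the adjoint \bar B of the output is
\[
  \bar A = Q \left( F \circ \big(Q^{\top} \bar B \, Q\big) \right) Q^{\top},
  \qquad
  F_{ij} =
  \begin{cases}
    \dfrac{f(\lambda_i) - f(\lambda_j)}{\lambda_i - \lambda_j}, & \lambda_i \neq \lambda_j,\\[1ex]
    f'(\lambda_i), & \lambda_i = \lambda_j,
  \end{cases}
\]
% where \circ is the elementwise (Hadamard) product. The divided-difference
% matrix F is exactly where naive differentiation of eigenvectors breaks down.
```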

More Than a Thousand-fold Speedup for xVA Pricing Calculations with Intel® Xeon® Scalable Processors

An Intel-led whitepaper demonstrating an up to 1770x performance increase for XVA pricing (and 830x for XVA risks!) on Intel processors when using MatLogica AADC. The benchmark is open-source and available on GitHub.

A New Approach to Parallel Computing Using Automatic Differentiation: Getting Top Performance on Modern Multicore Systems

A paper in Parallel Universe Magazine №40 featuring a new approach that turns object-oriented, single-threaded, scalar code into AVX2/AVX512-vectorized, multi-threaded, thread-safe lambda functions with no runtime penalty for quantitative finance applications.
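
A few lines of AVX2 intrinsics illustrate what "vectorized" means here (hand-written and purely illustrative; AADC emits such code automatically, whereas this snippet must be compiled with -mavx or wider): four scenarios flow through a single instruction stream instead of four passes through the original object-oriented code.

```cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    // Four scenario values of the same input, processed in one register.
    alignas(32) double x[4] = {100.0, 101.0, 102.0, 103.0};
    __m256d vx = _mm256_load_pd(x);
    __m256d a  = _mm256_set1_pd(0.97);   // e.g. a discount factor
    __m256d b  = _mm256_set1_pd(-95.0);  // e.g. minus a strike
    // y = a*x + b evaluated for all four scenarios at once.
    __m256d vy = _mm256_add_pd(_mm256_mul_pd(a, vx), b);

    alignas(32) double y[4];
    _mm256_store_pd(y, vy);
    for (int i = 0; i < 4; ++i)
        std::printf("scenario %d: %.2f\n", i, y[i]);
}
```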

Open-Source Benchmark

An open-source benchmark demonstrating a leap in performance for valuation and AAD risk calculations using AADC on Intel Xeon Scalable CPUs. Code is available for independent verification.

AAD and Calibration

Remarks on stochastic automatic adjoint differentiation and calibration of financial models for derivatives pricing.

AAD: Breaking the Primal Barrier

Dmitri Goloubentsev and Evgeny Lakshtanov wrote an article for Wilmott Magazine on how merging Code Transformation and Operator Overloading techniques leads to a major performance boost for automatic differentiation.