01 / Overview
Modern financial institutions rely on machine learning models for real-time fraud detection, yet the complexity of these models leaves them vulnerable to adversarial attacks. This research implements diffusion-based adversarial purification and synthetic data augmentation to stress-test and harden ML defenses without requiring continual adversarial retraining.
Using the IEEE-CIS Fraud Detection dataset, the framework generates distributionally realistic adversarial examples via Diff-PGD (Diffusion-based Projected Gradient Descent) and purifies corrupted inputs back onto the clean data manifold through TabDiff reverse diffusion, restoring adversarial robustness while maintaining the 96% ROC-AUC of the clean baseline.
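The attack-and-purify loop above can be illustrated with a deliberately simplified sketch. The classifier, weights, step sizes, and one-step "denoiser" below are all toy placeholders for illustration, not the project's actual IEEE-CIS model, Diff-PGD implementation, or TabDiff sampler; they only show the shape of the pipeline (PGD ascent in an L-infinity ball, then partial noising and a denoising step back toward the data distribution).

```python
import numpy as np

# Toy logistic "fraud classifier": p(fraud) = sigmoid(w . x + b).
# Weights are illustrative placeholders, not a trained model.
rng = np.random.default_rng(0)
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    # Gradient of binary cross-entropy w.r.t. the input features.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def pgd_attack(x, y, eps=0.3, alpha=0.05, steps=20):
    """Projected Gradient Descent: ascend the loss, then project the
    perturbed point back into an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss_wrt_x(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

def purify(x_adv, x_clean_mean, t=0.5, noise_scale=0.3):
    """Sketch of diffusion purification: partially noise the input
    (forward diffusion), then take one denoising step toward the
    clean-data mean -- a crude stand-in for a learned reverse-diffusion
    model such as TabDiff."""
    noised = x_adv + noise_scale * rng.standard_normal(x_adv.shape)
    return (1 - t) * noised + t * x_clean_mean

# A "legitimate" transaction (label y = 0) pushed toward a fraud score.
x = np.array([0.2, 0.1, -0.3])
x_adv = pgd_attack(x, y=0.0)
x_pure = purify(x_adv, x_clean_mean=x)
```

In the real framework the projection also constrains samples to the learned data manifold (the "distributionally realistic" property of Diff-PGD), and purification runs a full reverse-diffusion trajectory rather than a single interpolation step.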
02 / Process
03 / Impact
"Diffusion models represent a paradigm shift in generative modeling applied to adversarial defense, offering a compelling combination of high-fidelity data synthesis and efficient, retraining-free purification. This makes them exceptionally well-suited for the demanding, high-stakes environment of real-time financial fraud detection."