
AAIA Exam Domains Explained: Where IT Auditors Struggle

By Dr. Baz Abouelenein
AAIA · CISA · CISM · CRISC · CISSP · PMP
April 27, 2026 · 8 min read

The ISACA Advanced in AI Audit (AAIA) exam has 90 questions and a 150-minute time limit. It tests your ability to evaluate risks and controls in artificial intelligence systems. CISA holders know IT general controls, access management, and change management. The AAIA domains require a different frame. You are auditing probabilistic models, not deterministic software.

The exam divides into three domains: AI Governance and Risk (33%), AI Operations (46%), and AI Auditing Tools and Techniques (21%). Domain 2 is where most IT auditors lose points. It is the largest section, the most technical, and the furthest from traditional audit experience.


Domain 1: AI Governance and Risk (33%)

Domain 1 tests your ability to advise stakeholders on AI implementation aligned with organizational goals, create ethical AI policies, and mitigate risks. It establishes the vocabulary and regulatory context for the rest of the exam.

The Subtopics

  • A: AI Models, Considerations, and Requirements: Understand differences between generative AI, predictive models, and deterministic algorithms, plus their unique risks.
  • B: AI Governance and Program Management: Covers oversight structures, including AI ethics boards and cross-functional committees.
  • C: AI Risk Management: Identify, assess, and mitigate risks related to bias, explainability, and security.
  • D: Privacy and Data Governance Programs: Apply data minimization and purpose limitation to large training datasets.
  • E: Leading Practices, Ethics, Regulations, and Standards for AI: Know frameworks like NIST AI RMF, ISO 42001, and the EU AI Act.

Where Auditors Fail

Traditional auditors assume standard IT risk management applies fully to AI. It does not.

Software failures in IT are usually binary — a code error produces a wrong output. AI failures can occur when a model functions correctly but produces biased outputs because the training data was skewed. You are governing a system that learns and changes over time, not a static system patched periodically.

Generic answers about "establishing a steering committee" fail if the question targets ISO 42001's AI System Impact Assessment. Memorize the structural requirements of the 21 AI frameworks tested. The exam distinguishes between them at the control level.


Domain 2: AI Operations (46%)

Domain 2 is the largest and most technical section. It tests your ability to assess AI risk profiles, operational readiness, and the controls that govern the machine learning lifecycle.

The Subtopics

  • A: Data Management Specific to AI: Covers data lineage, quality, and controls to prevent data poisoning.
  • B: AI Solution Development Methodologies and Lifecycle: Understand MLOps, continuous training, and model deployment.
  • C: Change Management Specific to AI: Manage updates to models that continuously learn.
  • D: Supervision of AI Solutions: Monitor outputs, impacts, and decisions in production.
  • E: Testing Techniques for AI Solutions: Know cross-validation, fairness testing, and adversarial robustness testing.
  • F: Threats and Vulnerabilities Specific to AI: Includes prompt injection, model inversion, and model extraction.
  • G: Incident Response Management Specific to AI: Respond to AI system degradation or compromise.
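The fairness testing named in subtopic E can be made concrete. One common heuristic auditors encounter is the "four-fifths rule," which flags potential disparate impact when one group's selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only; the group names, decision data, and 0.8 threshold are assumptions for the example, not ISACA-prescribed values.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths heuristic). True = passes."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% approved
}
print(four_fifths_check(decisions))  # {'group_a': True, 'group_b': False}
```

An auditor would not run this by hand in practice; the point is that fairness testing is a quantitative control with a defined threshold, not a judgment call, and the exam expects you to recognize it as such.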

Where Auditors Fail

Most IT auditors lack practical experience with the machine learning lifecycle. Without understanding how training, validation, and holdout test sets differ, you cannot audit model performance metrics.
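The three-way split is worth internalizing. A minimal sketch, assuming an illustrative 70/15/15 convention (the proportions vary in practice):

```python
import random

def split_dataset(records, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then carve out validation and holdout test sets."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]               # holdout: touched only at final evaluation
    val = shuffled[n_test:n_test + n_val]  # validation: tuning and model selection
    train = shuffled[n_test + n_val:]      # training: fits model parameters
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The audit-relevant point: if the holdout set leaked into training or tuning, the reported accuracy is unreliable, and no amount of output testing will reveal that. Segregation of these sets is a control you can verify.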

Drift is the concept that trips candidates most often. Data drift means the statistical properties of input data change over time. Concept drift means the relationship between the input and the target variable shifts. A credit card fraud detection model loses accuracy if consumer spending habits change, as they did sharply in 2020. Controls that detect drift and trigger retraining are what you audit, not the model outputs themselves.
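A data-drift control is typically statistical. As a minimal sketch of the idea, here is a dependency-free two-sample Kolmogorov-Smirnov comparison between a reference distribution and a production window; the 0.2 alert threshold is an illustrative assumption, not a standard value.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDFs of the two samples (0 = identical, 1 = fully disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

def drift_alert(reference, production, threshold=0.2):
    """Flag data drift when the KS statistic exceeds the (illustrative) threshold."""
    return ks_statistic(reference, production) > threshold

reference = list(range(100))             # baseline feature distribution
shifted = [x + 50 for x in range(100)]   # production window after a shift
print(drift_alert(reference, reference))  # False
print(drift_alert(reference, shifted))    # True
```

When you audit a drift control, you are checking that something like this runs on a schedule, that the threshold was set deliberately, and that an alert actually triggers a retraining workflow.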


Domain 3: AI Auditing Tools and Techniques (21%)

Domain 3 tests audit techniques tailored to AI systems and the use of AI-enabled tools to improve audit efficiency.

The Subtopics

  • A: Audit Planning and Design: Scope audits of complex, integrated AI systems.
  • B: Audit Testing and Sampling Methodologies: Sample probabilistic outputs of generative AI models.
  • C: Audit Evidence Collection Techniques: Define sufficient evidence when auditing black-box neural networks.
  • D: Audit Data Quality and Data Analytics: Use AI tools to enhance audit processes.
  • E: AI Audit Outputs and Reports: Communicate residual AI risk to boards.

Where Auditors Fail

Auditors apply traditional substantive testing to AI outputs. Sampling 25 transactions cannot verify that an AI model functions correctly. Outputs are probabilistic: the same input can generate different outputs depending on model architecture and state.
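To see why identical inputs can yield different outputs, consider how a generative model samples a token: scores are converted to a probability distribution and then drawn at random. A self-contained sketch (the token names and scores are hypothetical):

```python
import math, random

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; lower temperature concentrates
    probability mass on the top-scoring option."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token at random according to the softmax distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["approve", "review", "deny"]
logits = [2.0, 1.0, 0.5]                     # hypothetical model scores
rng = random.Random(0)
outputs = {sample_token(tokens, logits, 1.0, rng) for _ in range(200)}
print(outputs)  # typically contains more than one distinct token
```

The same input produced different outputs across 200 calls. That is why a fixed sample of outputs is weak evidence: it tests one draw from a distribution, not the system that generates the distribution.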

Test controls around the model: governance structures, training data quality checks, monitoring alerts. Not just outputs. Explainability is a separate audit objective. If a model denies a loan, the organization must explain why to customers and regulators. Verify that explainability tools like SHAP or LIME are implemented and functioning — not just present in documentation.
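SHAP and LIME are real libraries with their own APIs; the sketch below does not reproduce either. It illustrates the core idea they share, attributing a prediction to individual input features, using simple baseline substitution against a hypothetical loan-scoring function. The feature names, scoring weights, and baseline values are all assumptions for the example.

```python
def loan_score(features):
    """Hypothetical linear scoring model (illustrative only; higher is better)."""
    return 2.0 * features["income"] - 3.0 * features["debt"] + 0.5 * features["tenure"]

def attribution(model, instance, baseline):
    """Per-feature contribution: how much the score drops when each feature
    is reset to a baseline value (a crude, SHAP-like attribution)."""
    full = model(instance)
    contribs = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline[name]
        contribs[name] = full - model(perturbed)
    return contribs

applicant = {"income": 80.0, "debt": 30.0, "tenure": 4.0}
baseline = {"income": 50.0, "debt": 20.0, "tenure": 2.0}
print(attribution(loan_score, applicant, baseline))
# {'income': 60.0, 'debt': -30.0, 'tenure': 1.0}
```

An auditor verifying explainability would confirm that attributions like these are generated for real decisions, stored, and surfaced to the people who must explain the outcome, rather than confirming the tool's name appears in a policy document.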


The Cross-Domain Skills Tested

ISACA lists 23 cross-domain skills that cut across all three domains. They confirm the AAIA tests practical advisory ability, not just technical knowledge.

  • Evaluating AI's impact on workforce training and education.
  • Assessing AI vendors and supply chain management.
  • Evaluating data input requirements for AI models: appropriateness, bias, privacy.
  • Using AI solutions to improve audit planning, execution, and reporting.

How to Prepare

Reading whitepapers will not pass this exam. Practice applying concepts to audit scenarios.

Know the difference between data drift and concept drift. Know which NIST AI RMF function — Govern, Map, Measure, or Manage — applies to a given audit scenario. Know what ISO 42001's AI System Impact Assessment requires and when it triggers. These distinctions appear on the exam as scenario questions, not definition recalls.

Candidates who pass on the first attempt typically spend 6 to 8 weeks cycling through practice questions until their domain accuracy stabilizes. Domain 2 is where most of that time goes.

Download AAIA Prep on the App Store

AAIA Prep is a comprehensive iOS app built for the ISACA Advanced in AI Audit exam.

  • 1,155 Practice Questions: Mapped to the 33/46/21 domain weighting.
  • Domain Accuracy Dashboard: Track performance by domain. Identify where to focus study time.
  • 200 Spaced-Repetition Flashcards: Retain technical vocabulary of MLOps and adversarial threats.
  • Full 90-Question Mock Exams: Scaled scoring to test readiness under timed conditions.

References

  1. ISACA. "AAIA™ Exam Content Outline." https://www.isaca.org/credentialing/aaia/aaia-exam-content-outline