25 Free AAIA Practice Questions (With Explanations)

Most IT auditors who fail the AAIA exam don't fail because they didn't study. They fail because they studied the wrong way — reading frameworks instead of applying them under time pressure.
The exam gives you 150 minutes for 90 questions. That's 100 seconds per question. At that pace, you can't reconstruct the NIST AI RMF MAP function from memory. You have to already know it.
Below are 25 free AAIA practice questions, mapped to the three exam domains at their actual weights: AI Governance and Risk (33%), AI Operations (46%), and AI Auditing Tools and Techniques (21%). Each answer includes an explanation of why the correct choice is right and why the distractors are wrong. That's the part most free question banks skip.
Take the diagnostic. Review the explanations. If you're guessing on more than a third of these, you're not ready to schedule the exam.
Domain 1: AI Governance and Risk (33%)
An organization is implementing the NIST AI Risk Management Framework (AI RMF 1.0). Which activity is a primary objective of the MAP function?
MAP defines the AI system's context — its purpose, stakeholders, and potential risks before deployment. Option A belongs to GOVERN. Option C belongs to MEASURE. Option D belongs to MANAGE. The exam tests whether you can assign activities to the right function, not just name the four functions.
Under the EU AI Act, an organization deploying an AI system to evaluate creditworthiness for consumer loans faces which regulatory classification?
The EU AI Act classifies AI systems used in essential private services — credit evaluation included — as High Risk under Annex III. These systems must meet requirements under Article 16 before deployment, including a conformity assessment and registration in the EU database. Unacceptable Risk applies to social scoring by public authorities and real-time biometric surveillance, not credit tools.
An IT auditor reviews compliance with ISO/IEC 42001:2023. Which document is MOST likely requested to verify Clause 6.1.2.3 compliance?
Clause 6.1.2.3 requires an AI System Impact Assessment to evaluate impacts on individuals, groups, and society. The SoA (Option A) satisfies Annex A control selection, not impact assessment. The acceptable use policy (Option B) is a governance document, not an impact assessment. Root cause analysis (Option D) is a corrective action artifact, not a planning document.
According to OECD AI Principles, which is NOT one of the five value-based principles for trustworthy AI?
The five OECD AI Principles are: inclusive growth and sustainable development, human-centred values and fairness, transparency and explainability, robustness and security, and accountability. Zero-trust architecture is a cybersecurity framework concept — it doesn't appear in the OECD principles. The exam frequently tests whether candidates conflate cybersecurity frameworks with AI governance frameworks.
A company deploys a generative AI chatbot for customer service. To meet the "transparency" requirement of trustworthy AI per NIST, which control is MOST appropriate?
Transparency requires that users know they're interacting with an AI system and understand its limitations. Disclosure satisfies this requirement and aligns with the EU AI Act's transparency obligations for chatbots (Article 50 in the final text of the Act). Encryption (Option A) and WAF (Option C) are security controls, not transparency controls. Training on anonymized data (Option D) addresses privacy, not transparency.
Which of the following protocols and practices is MOST important to consider when building AI?
AI ethical standards provide the foundational principles — fairness, accountability, transparency, and non-maleficence — that govern how AI systems should be designed and operated. Industry best practices are derived from ethical standards, not the other way around. Geopolitical parameters are external constraints, not a governance protocol. Mission and vision guide organizational direction but do not constitute protocols or practices for building AI systems responsibly.
Under the EU AI Act, which category of AI system requires a conformity assessment before deployment?
The EU AI Act classifies AI systems into risk tiers. High-risk AI systems — including those used in critical infrastructure, employment decisions, credit scoring, and biometric identification — must undergo a conformity assessment before being placed on the market. Minimal risk systems face no mandatory requirements. Limited risk systems require transparency obligations. General purpose AI models have their own obligations under the Act but are governed by a separate chapter.
When reviewing an organization's AI data governance program, which of the following is MOST important to validate to ensure compliance with privacy regulations?
Privacy regulations such as GDPR and CCPA require that personal data be processed lawfully and with appropriate protections. Validating the implementation of privacy-preserving techniques — such as differential privacy, data minimization, anonymization, and access controls within AI models — directly addresses compliance. Cloud deployment security is an infrastructure concern, not a data governance one. Avoiding open-source algorithms is not a regulatory requirement.
Which of the following would be of GREATEST concern to an IS auditor reviewing an organization's AI policies and procedures?
The absence of a requirement for external validation before deployment means AI systems may go live without independent assurance that they are accurate, unbiased, safe, and compliant. External validation provides an objective check that internal teams cannot provide for their own systems. Missing disaster recovery documentation is a concern but is less critical than deploying unvalidated AI. A stale privacy policy is a compliance risk but is addressable through a review cycle.
Coming from a CISA background? Domain 2 is where the knowledge gap is widest.
Read: From CISA to AAIA in 90 Days →

Domain 2: AI Operations (46%)
A housing price prediction model's accuracy drops over six months without any code changes. Macroeconomic shifts have altered how buyers value property features. This is an example of:
Concept drift occurs when the relationship between input features and the target variable changes — the model was trained on one world and is now operating in a different one. Data drift (Option C) refers to changes in the statistical distribution of input data alone, without a change in the underlying relationship. The distinction matters on the exam because the audit response differs: concept drift requires retraining, data drift may only require recalibration.
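The distinction can be made concrete with a toy monitoring check. The sketch below uses invented numbers: the input distribution holds steady (no data drift signal) while the feature-to-price relationship weakens (a concept drift signal). This is illustrative only, not a production drift monitor.

```python
# Toy illustration: separating data drift from concept drift on one
# feature of a housing model. All numbers are made up.

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    # Pearson correlation between a feature and the target.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Training window: bigger houses sold for proportionally more.
train_sqft  = [1000, 1500, 2000, 2500, 3000]
train_price = [200, 300, 400, 500, 600]

# Production window: the same input distribution, but macro shifts have
# weakened the size/price relationship -- concept drift.
prod_sqft  = [1000, 1500, 2000, 2500, 3000]
prod_price = [400, 380, 410, 390, 420]

input_shift = abs(mean(prod_sqft) - mean(train_sqft))   # data drift signal
relation_shift = corr(train_sqft, train_price) - corr(prod_sqft, prod_price)

print(input_shift)       # 0.0 -> inputs look unchanged
print(relation_shift)    # large -> the relationship itself changed
```

An auditor would expect the monitoring program to track both signals, since only the second one indicates the model needs retraining rather than recalibration.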
An attacker alters individual pixels on a stop sign image. Humans see a stop sign. The model classifies it as a speed limit sign. Which adversarial attack is this?
Evasion attacks manipulate inputs at inference time to cause misclassification without altering the model itself. Model inversion (Option A) reconstructs training data from model outputs. Prompt injection (Option B) manipulates LLM behavior through crafted inputs. Data poisoning (Option D) corrupts training data before the model is built. The stop sign scenario is the canonical evasion attack example — it appears in ISACA study materials verbatim.
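A minimal sketch of the mechanic, using a hypothetical 10-"pixel" linear classifier rather than a real vision model: each pixel is nudged by a tiny amount in the direction that lowers the classification score, and the small per-pixel changes accumulate into a flipped label. Real evasion attacks such as FGSM do the analogous thing against deep networks.

```python
# Toy evasion attack on a linear "image" classifier (illustrative only).

def score(weights, pixels):
    return sum(w * p for w, p in zip(weights, pixels))

weights = [2.0, -2.0] * 5       # hypothetical trained weights (10 "pixels")
pixels  = [0.55, 0.50] * 5      # the "stop sign" input

label_before = "stop" if score(weights, pixels) > 0 else "speed_limit"

# Nudge every pixel by at most 0.05 against the weight's sign -- the
# essence of an evasion attack: tiny input change, flipped output.
epsilon = 0.05
adversarial = [p - epsilon * (1 if w > 0 else -1)
               for w, p in zip(weights, pixels)]

label_after = "stop" if score(weights, adversarial) > 0 else "speed_limit"
print(label_before, "->", label_after)   # stop -> speed_limit
```

Note the model itself is untouched; only the inference-time input changed, which is what distinguishes evasion from data poisoning.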
During training, the data science team uses 100% of labeled data to train the model. What is the primary risk?
Using all available data for training leaves no holdout set for validation. The model may perform well on training data but fail in production — a classic overfitting scenario. An auditor reviewing this setup should request evidence of a train/validation/test split. Option A (underfitting) is the opposite problem: too little training data or a model too simple for the task.
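The artifact an auditor would ask for is evidence of a documented, reproducible split. A minimal sketch of what that looks like (the function name and fractions are illustrative, not a standard API):

```python
# Minimal sketch of the split an auditor should expect evidence of.
import random

def train_val_test_split(rows, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then carve off held-out validation and test sets."""
    rows = rows[:]                      # don't mutate the caller's list
    random.Random(seed).shuffle(rows)   # fixed seed -> reproducible for audit
    n = len(rows)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = rows[:n_test]
    val = rows[n_test:n_test + n_val]
    train = rows[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))   # 70 15 15
```

The fixed seed matters from an assurance standpoint: it lets a reviewer regenerate the exact split and confirm the test set never leaked into training.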
An organization builds an LLM for internal document search. To prevent the system from leaking sensitive HR data, which control is MOST effective?
In a RAG architecture, the retrieval system fetches documents and passes them to the LLM as context. If access controls aren't enforced at the document retrieval layer, the LLM will surface any document it can reach — regardless of the user's authorization level. Training the model to refuse (Option B) is unreliable; LLMs can be prompted around behavioral guardrails. Encrypting weights (Option C) protects the model itself, not the data it retrieves.
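A hedged sketch of where that control sits. The document store, group names, and `retrieve_for_user` function are all hypothetical; the point is that the authorization filter runs before any text reaches the model's context window.

```python
# Sketch: enforcing document ACLs at the retrieval layer of a RAG pipeline.
# All names and documents here are invented for illustration.

DOCS = [
    {"id": "benefits-faq", "text": "PTO policy ...",         "allowed": {"all"}},
    {"id": "salary-bands", "text": "Engineer L5 band ...",   "allowed": {"hr"}},
    {"id": "exit-reviews", "text": "Q3 attrition notes ...", "allowed": {"hr"}},
]

def retrieve_for_user(query, user_groups):
    """Return only documents the user is authorized to see.
    The LLM never receives a document this filter rejects."""
    hits = [d for d in DOCS if query.lower() in d["text"].lower()]
    return [d for d in hits if d["allowed"] & (user_groups | {"all"})]

# An engineer searching for salary data gets nothing back; an HR user does.
print([d["id"] for d in retrieve_for_user("L5", {"engineering"})])  # []
print([d["id"] for d in retrieve_for_user("L5", {"hr"})])           # ['salary-bands']
```

This is why the exam answer is the retrieval-layer control: a behavioral guardrail in the prompt can be jailbroken, but a document the model never receives cannot be leaked.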
Which metric is MOST appropriate for a classification model where false negatives carry the highest cost (e.g., cancer detection)?
Recall measures the proportion of actual positives the model correctly identifies. Minimizing false negatives — missed cancer cases — requires maximizing recall. Precision (Option A) minimizes false positives, which matters more in spam filtering than medical diagnostics. Accuracy (Option C) is misleading on imbalanced datasets. F1 Score (Option D) balances precision and recall but doesn't optimize for either. The exam tests whether you can match the metric to the business risk, not just define each metric.
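A worked example makes the trap visible. The counts below are invented (a screening model on 1,000 cases): accuracy looks excellent while one in five cancers is missed.

```python
# Same confusion matrix, four different metrics. Counts are illustrative.
tp, fn = 80, 20      # 100 actual cancer cases: 80 caught, 20 missed
fp, tn = 50, 850     # 900 healthy cases: 50 false alarms

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)        # the metric that penalizes missed cases
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.3f} "
      f"recall={recall:.2f} f1={f1:.3f}")
# Accuracy is 0.93 even though 20% of cancers are missed -- which is why
# recall, not accuracy, matches the business risk in this scenario.
```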
The GREATEST risk to an organization training an AI system with data from a single source is:
Relying on a single data source creates a single point of failure: if that source is corrupted, biased, unavailable, or compromised, the entire model's integrity is at risk. Lack of flexibility and undesired homogenization are secondary concerns that may result from single-source training, but they do not represent the greatest risk. Insufficient transparency is a governance concern independent of data sourcing.
A company is developing an AI system to generate videos and images. Which option would BEST enable the company to mitigate harm caused by deepfakes?
Watermarking embeds an imperceptible signal into AI-generated media that allows the content to be traced back to its source system, enabling detection of deepfakes and attribution of synthetic content. Data sanitization addresses training data quality. Differential privacy protects individual data points during training but does not help identify generated content post-deployment. Model encryption protects the model itself from unauthorized access but does not address the harm caused by deepfake outputs.
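To show the embed/detect idea only, here is a deliberately naive least-significant-bit sketch on made-up pixel values. Production generative-media watermarks use far more robust statistical or frequency-domain schemes; this is not how any real system implements it.

```python
# Toy watermark embed/detect via least-significant bits (illustrative only).

def embed(pixels, mark_bits):
    """Overwrite the LSB of each pixel with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, mark_bits)]

def extract(pixels):
    return [p & 1 for p in pixels]

image = [200, 131, 54, 77, 240, 19, 88, 163]   # made-up 8-pixel "image"
mark  = [1, 0, 1, 1, 0, 0, 1, 0]               # source-system signature

stamped = embed(image, mark)
print(extract(stamped) == mark)                          # True -> attributable
print(max(abs(a - b) for a, b in zip(image, stamped)))   # 1 -> imperceptible
```

The two printed checks mirror the two audit properties that matter: the signature survives and can be extracted, and the pixel change is too small to perceive.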
If business objectives require an AI solution that continually learns from its outputs, an IS auditor should confirm risk and controls around:
Backpropagation is the mechanism by which a neural network updates its parameters based on the error between predicted and actual outputs — it is the core process that enables continual learning. An AI system that learns from its own outputs uses backpropagation to adjust the model, which introduces risks of feedback loops, runaway drift, and uncontrolled model change. Biases, weights, and activation functions are components of the model architecture but are not the process through which the model learns from outputs.
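The update backpropagation performs can be shown in miniature: one neuron, squared-error loss, a single gradient step. The numbers are illustrative. In a continually learning system this update keeps running in production, which is exactly why the auditor needs controls around it.

```python
# One-neuron gradient step: the core update behind backpropagation.
w, b = 0.5, 0.0          # current parameters
x, target = 2.0, 3.0     # input and desired output
lr = 0.1                 # learning rate

pred = w * x + b                 # forward pass: 1.0
error = pred - target            # -2.0
# Backward pass: gradients of 0.5 * error**2 w.r.t. each parameter.
grad_w = error * x               # -4.0
grad_b = error                   # -2.0
w -= lr * grad_w                 # 0.5 - 0.1 * (-4.0) = 0.9
b -= lr * grad_b                 # 0.0 - 0.1 * (-2.0) = 0.2
print(w, b, w * x + b)           # new prediction 2.0, moving toward 3.0
```

Repeated uncontrolled in production, this loop is what produces the feedback and drift risks the explanation describes.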
An IS auditor learns that the organization's AI solution is configured with web integration enabled. Which of the following is the MOST important control for the auditor to validate?
Web integration exposes the AI system to external inputs and outputs, significantly expanding the attack surface and the potential for misuse or data exfiltration. Activity logging integrated with the SIEM system provides the visibility needed to detect anomalous behavior, unauthorized access, and potential incidents in real time. Data augmentation is a pre-training concern. Inference time KPIs are performance metrics, not security controls. Separation of duties is important but is not the most critical control when web integration is the specific risk.
An AI system is misclassifying images after a routine model update. An IS auditor discovers that the updated model file was replaced by an unauthorized version. Which of the following is the auditor's BEST recommendation?
Reverting to the last verified model version immediately restores the integrity of the system while preserving the ability to investigate the incident. Root cause analysis then identifies how the unauthorized replacement occurred and what controls failed. Disabling the automated update process is a reactive measure that does not address the underlying security gap and may disrupt operations. Retraining from scratch is unnecessary and time-consuming when a verified version is available.
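The preventive control that catches a swap like this is a file-integrity check: compare the deployed model file's digest against the one recorded at release time. A minimal sketch, simulating the artifact with a temp file:

```python
# Sketch: verifying a deployed model file against a known-good digest.
import hashlib
import os
import tempfile

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate the released artifact and record its digest.
with tempfile.NamedTemporaryFile(delete=False, suffix=".model") as f:
    f.write(b"verified-model-weights-v1")
    path = f.name
registered_digest = sha256_of(path)     # stored in the model registry

# Later: an unauthorized replacement changes the file on disk.
with open(path, "wb") as f:
    f.write(b"tampered-model-weights")

tampered = sha256_of(path) != registered_digest
print("integrity check failed:", tampered)   # True -> revert to verified version
os.remove(path)
```

Running this check in the deployment pipeline turns the scenario in the question from a post-incident discovery into a blocked release.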
Want a deeper breakdown of all three domains before continuing?
Read: AAIA Exam Domains Explained →

Domain 3: AI Auditing Tools and Techniques (21%)
An auditor plans to audit an organization's AI inventory containing hundreds of models. To optimize scope, the auditor should FIRST:
A risk-based approach prioritizes models with higher inherent risk for detailed testing. Random sampling (Option A) ignores risk concentration. Requesting all source code (Option B) is impractical and skips scoping entirely. Interviewing the CDO (Option D) is useful but not the first step — you need a risk-ranked inventory before you know which conversations matter.
To evaluate the explainability of a black-box neural network used for loan approvals, which provides the BEST audit evidence?
SHAP values quantify each feature's contribution to individual predictions, providing explainability evidence that can be reviewed and challenged. A signed statement (Option A) is management representation, not audit evidence. Accuracy (Option C) measures performance, not explainability. Penetration testing (Option D) addresses security, not transparency of decision logic.
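For intuition, here is a brute-force computation of exact Shapley values for a tiny two-feature scoring model, averaging each feature's marginal contribution over every ordering. This is the quantity the SHAP library estimates efficiently for real models; the model, feature names, and baseline below are invented.

```python
# Exact Shapley values by permutation averaging (2 hypothetical features).
from itertools import permutations

BASELINE = {"income": 50.0, "debt_ratio": 0.40}   # reference applicant

def model(features):
    # Invented loan score: income pushes up, debt ratio pushes down.
    return 0.01 * features["income"] - 1.5 * features["debt_ratio"]

def shapley(instance):
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        current = dict(BASELINE)
        prev = model(current)
        for name in order:
            current[name] = instance[name]   # "reveal" one feature's true value
            now = model(current)
            contrib[name] += now - prev      # marginal contribution
            prev = now
    return {n: c / len(orders) for n, c in contrib.items()}

applicant = {"income": 90.0, "debt_ratio": 0.60}
phi = shapley(applicant)
print(phi)   # income +0.40, debt_ratio -0.30, relative to the baseline score
```

The attributions sum to the difference between the applicant's score and the baseline score, which is the property that makes SHAP output reviewable evidence: each loan decision decomposes into per-feature contributions that can be challenged.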
Auditing a generative AI system that drafts marketing copy, where outputs vary probabilistically, the auditor should:
Probabilistic AI systems can't be audited by testing individual outputs; the output space is too large and non-deterministic. The audit approach shifts to evaluating the controls around the system: how outputs are reviewed before publication, how quality is monitored, and how incidents are escalated. Disclaiming an opinion (Option A) is not appropriate when a controls-based approach is available. Reviewing 1,000 outputs (Option C) is impractical and statistically unreliable for this purpose.
An internal audit team uses an AI tool for anomaly detection on the general ledger. The primary risk is:
High false positive rates cause auditors to treat alerts as noise. When that happens, real anomalies get buried. Alert fatigue is a documented failure mode in AI-assisted audit and security operations. Option A describes a workforce concern, not an audit risk. Option C is a governance risk that should be addressed before deployment, not the primary operational risk. Option D is a training gap, not a system risk.
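The arithmetic behind alert fatigue is worth working once, because anomalies are rare at general-ledger scale. All figures below are illustrative:

```python
# Why a "low" false positive rate still buries auditors at GL scale.
entries = 1_000_000        # journal entries scanned per quarter
anomaly_rate = 0.0005      # 1 in 2,000 entries is truly anomalous
tpr = 0.95                 # the tool catches 95% of real anomalies
fpr = 0.02                 # ...but also flags 2% of normal entries

true_alerts  = entries * anomaly_rate * tpr            # 475
false_alerts = entries * (1 - anomaly_rate) * fpr      # 19,990
precision = true_alerts / (true_alerts + false_alerts)

print(f"{false_alerts:,.0f} false alerts vs {true_alerts:,.0f} real ones")
print(f"alert precision: {precision:.1%}")   # most alerts are noise
```

With these assumptions, roughly 98 of every 100 alerts are false, which is the condition under which reviewers start dismissing the queue wholesale.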
An auditor reviews an AI incident response plan. Which scenario MUST the playbook address?
Dependency on third-party AI models creates supply chain risk. When a provider deprecates a model version — which OpenAI, Google, and Anthropic all do on rolling schedules — organizations that haven't planned for it face sudden capability loss. The incident response plan must address this scenario. Options A, C, and D describe operational events that don't require incident response procedures.
An organization uses AI to automate inventory counts across multiple distribution centers. What BEST supports automated inventory verification without physical site visits?
Computer-vision cameras connected to AI systems can directly observe and count physical inventory in real time, enabling automated verification without requiring auditors to be on-site. NLP from shipping documents processes text records but cannot verify physical stock. ML-based replenishment prediction forecasts future needs rather than verifying current counts. RPA reconciles records in systems but cannot confirm physical inventory levels independently.
How Did You Score?
If terms like concept drift, SHAP values, evasion attacks, and the MAP function felt unfamiliar, that's the gap the exam will find. The AAIA doesn't test whether you've read the frameworks. It tests whether you can apply them to scenarios you haven't seen before.
Reading the official guide is a start. Passing requires enough repetition that the frameworks and the MLOps lifecycle become automatic — not something you reconstruct under time pressure.
1,155 Questions. Every Domain. Every Framework.
Other AAIA apps exist, but AAIA Prep has the deepest question bank. 1,155 practice questions mapped to the 33/46/21 domain weighting — nearly double the competition. Every question includes a full explanation. The domain accuracy dashboard shows exactly where you're losing points. Most candidates who pass on the first attempt spend 6 to 8 weeks cycling through questions until their readiness score holds above 75%.
- 1,155 Practice Questions: Mapped to the 33/46/21 domain weighting. Scenario-based, framework-mapped, exam-difficulty.
- Domain Accuracy Dashboard: Track performance by domain. See exactly where you're losing points.
- 8 Adaptive Study Modes: Including Weakest Subject mode that targets your gaps automatically.
- Full 90-Question Mock Exams: Scaled 200–800 scoring to test readiness under timed conditions.
- 21 AI Governance Frameworks: All 21 frameworks tested by ISACA, mapped to the domains where they appear.
- 200 Spaced-Repetition Flashcards: Retain technical vocabulary of MLOps, adversarial threats, and governance frameworks.
Want a deeper breakdown of each exam domain?
Read: AAIA Exam Domains Explained →

References
- [1] ISACA. "AAIA Exam Content Outline." https://www.isaca.org/credentialing/aaia/aaia-exam-content-outline
- [2] NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." https://www.nist.gov/itl/ai-risk-management-framework
- [3] ISO. "ISO/IEC 42001:2023 — AI management systems." https://www.iso.org/standard/42001
- [4] European Parliament. "EU Artificial Intelligence Act (Regulation (EU) 2024/1689)." https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Related Articles
From CISA to AAIA in 90 Days: Bridging the Knowledge Gap
The transition from CISA to AAIA is not a simple step up. It is a fundamental shift in how you view systems and risk. This guide maps the gaps and how to close them.
AAIA Exam Domains Explained: Where IT Auditors Struggle
The AAIA exam divides into three domains weighted 33/46/21. Most IT auditors pass Domain 1 and struggle with Domain 2. Here is what the operations domain actually tests and where preparation breaks down.
