
25 Free AAIA Practice Questions (With Explanations)

Dr. Baz Abouelenein
AAIA · CISA · CISM · CRISC · CISSP · PMP
May 6, 2026 · 14 min read

Most IT auditors who fail the AAIA exam don't fail because they didn't study. They fail because they studied the wrong way — reading frameworks instead of applying them under time pressure.

The exam gives you 150 minutes for 90 questions. That's 100 seconds per question. At that pace, you can't reconstruct the NIST AI RMF MAP function from memory. You have to already know it.

Below are 25 free AAIA practice questions, mapped to the three exam domains at their actual weights: AI Governance and Risk (33%), AI Operations (46%), and AI Auditing Tools and Techniques (21%). Each answer includes an explanation of why the correct choice is right and why the distractors are wrong. That's the part most free question banks skip.

Take the diagnostic. Review the explanations. If you're guessing on more than a third of these, you're not ready to schedule the exam.

Domain 1: AI Governance and Risk (33%)

1

An organization is implementing the NIST AI Risk Management Framework (AI RMF 1.0). Which activity is a primary objective of the MAP function?

A. Establishing an AI ethics committee to oversee model deployment.
B. Documenting the intended use cases and potential societal harms of a new LLM.
C. Conducting a disparate impact analysis to test for demographic bias in the training data.
D. Configuring automated alerts for concept drift in a production environment.

Answer: B.

MAP defines the AI system's context — its purpose, stakeholders, and potential risks before deployment. Option A belongs to GOVERN. Option C belongs to MEASURE. Option D belongs to MANAGE. The exam tests whether you can assign activities to the right function, not just name the four functions.

2

Under the EU AI Act, an organization deploying an AI system to evaluate creditworthiness for consumer loans faces which regulatory classification?

A. Unacceptable Risk (Prohibited).
B. High Risk (Subject to strict obligations including human oversight and risk management systems).
C. Limited Risk (Subject only to transparency disclosures).
D. Minimal Risk (Unregulated).

Answer: B.

The EU AI Act classifies AI systems used in essential private services — credit evaluation included — as High Risk under Annex III. These systems must meet requirements under Article 16 before deployment, including a conformity assessment and registration in the EU database. Unacceptable Risk applies to social scoring by public authorities and real-time biometric surveillance, not credit tools.

3

An IT auditor reviews compliance with ISO/IEC 42001:2023. Which document would the auditor MOST likely request to verify compliance with Clause 6.1.2.3?

A. Statement of Applicability (SoA) mapping Annex A controls.
B. AI Acceptable Use Policy signed by top management.
C. AI System Impact Assessment detailing potential consequences to individuals and society.
D. Root cause analysis report for a recent model degradation incident.

Answer: C.

Clause 6.1.2.3 requires an AI System Impact Assessment to evaluate impacts on individuals, groups, and society. The SoA (Option A) satisfies Annex A control selection, not impact assessment. The acceptable use policy (Option B) is a governance document, not an impact assessment. Root cause analysis (Option D) is a corrective action artifact, not a planning document.

4

According to OECD AI Principles, which is NOT one of the five value-based principles for trustworthy AI?

A. Human-centred values and fairness.
B. Transparency and explainability.
C. Zero-trust architecture and cryptographic agility.
D. Accountability.

Answer: C.

The five OECD AI Principles are: inclusive growth and sustainable development, human-centred values and fairness, transparency and explainability, robustness and security, and accountability. Zero-trust architecture is a cybersecurity framework concept — it doesn't appear in the OECD principles. The exam frequently tests whether candidates conflate cybersecurity frameworks with AI governance frameworks.

5

A company deploys a generative AI chatbot for customer service. To meet the "transparency" requirement of trustworthy AI per NIST, which control is MOST appropriate?

A. Encrypting all customer inputs in transit and at rest.
B. Ensuring the chatbot clearly discloses it is an AI system.
C. Implementing a web application firewall (WAF) to block malicious traffic.
D. Training the model exclusively on anonymized data.

Answer: B.

Transparency requires that users know they're interacting with an AI system and understand its limitations. Disclosure satisfies this requirement and aligns with the EU AI Act's Article 50 transparency obligations for chatbots. Encryption (Option A) and WAF (Option C) are security controls, not transparency controls. Training on anonymized data (Option D) addresses privacy, not transparency.

6

Which of the following protocols and practices is MOST important to consider when building AI?

A. Industry best practices.
B. Geopolitical parameters.
C. AI mission and vision.
D. AI ethical standards.

Answer: D.

AI ethical standards provide the foundational principles — fairness, accountability, transparency, and non-maleficence — that govern how AI systems should be designed and operated. Industry best practices are derived from ethical standards, not the other way around. Geopolitical parameters are external constraints, not a governance protocol. Mission and vision guide organizational direction but do not constitute protocols or practices for building AI systems responsibly.

7

Under the EU AI Act, which category of AI system requires a conformity assessment before deployment?

A. Minimal risk AI systems.
B. Limited risk AI systems.
C. High-risk AI systems.
D. General purpose AI models.

Answer: C.

The EU AI Act classifies AI systems into risk tiers. High-risk AI systems — including those used in critical infrastructure, employment decisions, credit scoring, and biometric identification — must undergo a conformity assessment before being placed on the market. Minimal risk systems face no mandatory requirements. Limited risk systems require transparency obligations. General purpose AI models have their own obligations under the Act but are governed by a separate chapter.

8

When reviewing an organization's AI data governance program, which of the following is MOST important to validate to ensure compliance with privacy regulations?

A. Security of private cloud deployment of AI models.
B. Implementation of privacy techniques in AI models.
C. Avoidance of AI models that are based on open-source algorithms.
D. Feasibility of AI model retraining to incorporate new privacy data.

Answer: B.

Privacy regulations such as GDPR and CCPA require that personal data be processed lawfully and with appropriate protections. Validating the implementation of privacy-preserving techniques — such as differential privacy, data minimization, anonymization, and access controls within AI models — directly addresses compliance. Cloud deployment security (Option A) is an infrastructure concern, not a data governance one. Avoiding open-source algorithms (Option C) is not a regulatory requirement. Retraining feasibility (Option D) is an operational consideration, not a privacy control.
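
To make "privacy techniques" concrete, here is a minimal sketch of one of them, differential privacy, applied to a count query over a toy HR dataset. The epsilon value, dataset, and query are illustrative, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count query.

    Adding or removing one record changes a count by at most 1, so the
    Laplace noise scale is sensitivity / epsilon = 1 / epsilon.
    """
    true_count = sum(predicate(r) for r in records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy HR dataset: release the count of salaries above 100k with noise,
# so no individual's presence in the data can be confidently inferred.
salaries = [88_000, 120_000, 95_000, 150_000, 101_000]
print(dp_count(salaries, lambda s: s > 100_000, epsilon=0.5))
```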

9

Which of the following would be of GREATEST concern to an IS auditor reviewing an organization's AI policies and procedures?

A. The documentation of AI models does not address business resiliency and disaster recovery.
B. The data privacy policy has not been reviewed in the past three years.
C. External validation is not required for AI systems before deployment.
D. The AI model does not have an approval process for production changes.

Answer: C.

The absence of a requirement for external validation before deployment means AI systems may go live without independent assurance that they are accurate, unbiased, safe, and compliant. External validation provides an objective check that internal teams cannot provide for their own systems. Missing disaster recovery documentation (Option A) is a concern but less critical than deploying unvalidated AI. A stale privacy policy (Option B) is a compliance risk addressable through a review cycle. A missing production change approval process (Option D) is a serious change-management gap, but it presumes a validated system exists to change; independent validation comes first.

Coming from a CISA background? Domain 2 is where the knowledge gap is widest.

Read: From CISA to AAIA in 90 Days →

Domain 2: AI Operations (46%)

10

A housing price prediction model's accuracy drops over six months without any code changes. Macroeconomic shifts have altered how buyers value property features. This is an example of:

A. Data poisoning.
B. Concept drift.
C. Data drift.
D. Overfitting.

Answer: B.

Concept drift occurs when the relationship between input features and the target variable changes — the model was trained on one world and is now operating in a different one. Data drift (Option C) refers to changes in the statistical distribution of input data alone, without a change in the underlying relationship. The distinction matters on the exam because the audit response differs: concept drift requires retraining, data drift may only require recalibration.
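
Data drift, the easier of the two to monitor, can be flagged statistically without waiting for labels. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the feature, distributions, and alerting threshold are illustrative. Confirming concept drift, by contrast, requires ground-truth outcomes and a re-measured error rate.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# One feature's values at training time vs. in production. In the housing
# example, think square footage or prevailing interest rate.
train_feature = rng.normal(loc=50, scale=10, size=5_000)
prod_feature = rng.normal(loc=57, scale=12, size=5_000)   # shifted distribution

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:                       # illustrative alerting threshold
    print(f"Data drift flagged (KS={stat:.3f}, p={p_value:.1e})")
else:
    print("No significant distribution shift detected")
```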

11

An attacker alters individual pixels on a stop sign image. Humans see a stop sign. The model classifies it as a speed limit sign. Which adversarial attack is this?

A. Model inversion.
B. Prompt injection.
C. Evasion attack (adversarial example).
D. Data poisoning.

Answer: C.

Evasion attacks manipulate inputs at inference time to cause misclassification without altering the model itself. Model inversion (Option A) reconstructs training data from model outputs. Prompt injection (Option B) manipulates LLM behavior through crafted inputs. Data poisoning (Option D) corrupts training data before the model is built. The stop sign scenario is the canonical evasion attack example — it appears in ISACA study materials verbatim.
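
The recipe behind most evasion attacks is gradient-based: nudge each input dimension by a small epsilon in the direction that flips the model's decision (the fast gradient sign method). Here is a minimal sketch on a toy linear classifier, where the gradient is simply the weight vector; `w`, `b`, `x`, and epsilon are all illustrative, and real attacks target deep networks.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w.x + b > 0.
w = np.array([1.5, -2.0, 0.7])
b = 0.1
x = np.array([0.9, 0.2, 0.4])            # stand-in for normalized pixel values

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style step: for a linear model the gradient of the score w.r.t. the
# input is just w, so push every "pixel" epsilon against the current class.
epsilon = 0.4
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
x_adv = np.clip(x + epsilon * direction, 0, 1)   # keep pixels in valid range

print(predict(x), "->", predict(x_adv))          # 1 -> 0: classification flips
print("max per-pixel change:", np.abs(x_adv - x).max())   # bounded by epsilon
```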

12

During training, the data science team uses 100% of labeled data to train the model. What is the primary risk?

A. Underfitting.
B. Inability to validate model performance on unseen data.
C. Vulnerability to prompt injection.
D. Excessive training time exceeding compute budgets.

Answer: B.

Using all available data for training leaves no holdout set for validation. The model may perform well on training data but fail in production — a classic overfitting scenario. An auditor reviewing this setup should request evidence of a train/validation/test split. Option A (underfitting) is the opposite problem: too little training data or a model too simple for the task.
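
The evidence an auditor should expect is a documented split along these lines; a common pattern using scikit-learn's `train_test_split` is sketched below. The 70/15/15 proportions and stand-in data are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1_000).reshape(-1, 1)                        # stand-in features
y = np.random.default_rng(7).integers(0, 2, size=1_000)    # stand-in labels

# 70% train / 15% validation / 15% test, stratified to keep class balance.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

print(len(X_train), len(X_val), len(X_test))   # 700 150 150
```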

13

An organization builds an LLM for internal document search. To prevent the system from leaking sensitive HR data, which control is MOST effective?

A. Apply Role-Based Access Control (RBAC) on source documents before indexing by the RAG system.
B. Train the model to refuse answering HR data queries.
C. Encrypt model weights at rest.
D. Conduct a disparate impact analysis on HR data.

Answer: A.

In a RAG architecture, the retrieval system fetches documents and passes them to the LLM as context. If access controls aren't enforced at the document retrieval layer, the LLM will surface any document it can reach — regardless of the user's authorization level. Training the model to refuse (Option B) is unreliable; LLMs can be prompted around behavioral guardrails. Encrypting weights (Option C) protects the model itself, not the data it retrieves.
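
Conceptually, the control looks like this sketch: the document's access control list travels with it into the index, and authorization is enforced at retrieval time, before anything enters the LLM's context window. The document structure and role names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)   # ACL carried into the index

INDEX = [
    Doc("pol-001", "Expense policy text ...", {"employee", "hr"}),
    Doc("hr-883", "Salary band data ...", {"hr"}),
]

def retrieve(query: str, user_roles: set, k: int = 5) -> list:
    # Stand-in relevance step: a real RAG pipeline would run vector search here.
    candidates = INDEX[:k]
    # The control under audit: drop any document the caller is not cleared
    # to see BEFORE it can reach the LLM's context window.
    return [d for d in candidates if d.allowed_roles & user_roles]

docs = retrieve("what are the salary bands?", user_roles={"employee"})
print([d.doc_id for d in docs])   # ['pol-001'] - the HR-only doc never reaches the LLM
```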

14

Which metric is MOST appropriate for a classification model where false negatives carry the highest cost (e.g., cancer detection)?

A. Precision.
B. Recall (Sensitivity).
C. Accuracy.
D. F1 Score.

Answer: B.

Recall measures the proportion of actual positives the model correctly identifies. Minimizing false negatives — missed cancer cases — requires maximizing recall. Precision (Option A) minimizes false positives, which matters more in spam filtering than medical diagnostics. Accuracy (Option C) is misleading on imbalanced datasets. F1 Score (Option D) balances precision and recall but doesn't optimize for either. The exam tests whether you can match the metric to the business risk, not just define each metric.
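
A tiny worked example of why the metric choice matters (labels illustrative; 1 = cancer present). A model that predicts "healthy" for everyone scores 80% accuracy while missing every case:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 10 patients, 2 actual positives
y_lazy = [0] * 10                          # model A: calls everyone healthy
y_good = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]   # model B: catches both, one false alarm

print(accuracy_score(y_true, y_lazy))    # 0.8   - looks respectable
print(recall_score(y_true, y_lazy))      # 0.0   - misses every cancer case
print(accuracy_score(y_true, y_good))    # 0.9
print(recall_score(y_true, y_good))      # 1.0   - no missed cases
print(precision_score(y_true, y_good))   # ~0.67 - the cost: one false alarm
```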

15

The GREATEST risk to an organization training an AI system with data from a single source is:

A. A single point of failure.
B. A lack of flexibility.
C. Undesired homogenization.
D. Insufficient transparency.

Answer: A.

Relying on a single data source creates a single point of failure: if that source is corrupted, biased, unavailable, or compromised, the entire model's integrity is at risk. Lack of flexibility and undesired homogenization are secondary concerns that may result from single-source training, but they do not represent the greatest risk. Insufficient transparency is a governance concern independent of data sourcing.

16

A company is developing an AI system to generate videos and images. Which option would BEST enable the company to mitigate harm caused by deepfakes?

A. Data sanitization.
B. Differential privacy.
C. Watermarking.
D. Model encryption.

Answer: C.

Watermarking embeds an imperceptible signal into AI-generated media that allows the content to be traced back to its source system, enabling detection of deepfakes and attribution of synthetic content. Data sanitization addresses training data quality. Differential privacy protects individual data points during training but does not help identify generated content post-deployment. Model encryption protects the model itself from unauthorized access but does not address the harm caused by deepfake outputs.
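
As a toy illustration of the mechanism (production schemes are statistical and far more robust to cropping and re-encoding), the sketch below hides a provenance bit pattern in the least significant bits of an image array and recovers it later; the tag and frame are made up.

```python
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)   # hypothetical provenance tag

def embed(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    flat = image.flatten().copy()
    flat[: mark.size] = (flat[: mark.size] & 0xFE) | mark    # overwrite LSBs
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)    # stand-in video frame

marked = embed(frame, MARK)
print(extract(marked, MARK.size))                  # tag recovered: [1 0 1 1 0 0 1 0]
print(np.abs(marked.astype(int) - frame).max())    # <= 1: change is imperceptible
```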

17

If business objectives require an AI solution that continually learns from its outputs, an IS auditor should confirm risk and controls around:

A. Backpropagation.
B. Applying biases.
C. Applying weights.
D. Activation functions.

Answer: A.

Backpropagation is the mechanism by which a neural network updates its parameters based on the error between predicted and actual outputs — it is the core process that enables continual learning. An AI system that learns from its own outputs uses backpropagation to adjust the model, which introduces risks of feedback loops, runaway drift, and uncontrolled model change. Biases, weights, and activation functions are components of the model architecture but are not the process through which the model learns from outputs.
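
A minimal sketch of the mechanism: one sigmoid neuron, with the squared-error gradient propagated back through the chain rule to update the weight and bias each step. All values are illustrative. In a system that keeps learning from its own outputs, this loop never stops running, which is exactly why it needs change control and drift monitoring.

```python
import numpy as np

x, y_target = 2.0, 1.0        # one training example
w, b, lr = 0.1, 0.0, 0.5      # initial parameters and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    # Forward pass.
    y_hat = sigmoid(w * x + b)
    # Backward pass (backpropagation): chain rule from loss to parameters.
    dloss_dyhat = 2 * (y_hat - y_target)     # d/dy of (y_hat - y)^2
    dyhat_dz = y_hat * (1 - y_hat)           # sigmoid derivative
    w -= lr * dloss_dyhat * dyhat_dz * x     # dz/dw = x
    b -= lr * dloss_dyhat * dyhat_dz * 1.0   # dz/db = 1

print(f"prediction after training: {sigmoid(w * x + b):.3f}")   # approaches 1.0
```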

18

An IS auditor learns that the organization's AI solution is configured with web integration enabled. Which of the following is the MOST important control for the auditor to validate?

A. Data augmentation activities prior to model building.
B. Key performance indicator (KPI) metrics for model inference time.
C. Separation of duties between the model creator and the model tester.
D. Activity logging with integration to the organization's SIEM system.

Answer: D.

Web integration exposes the AI system to external inputs and outputs, significantly expanding the attack surface and the potential for misuse or data exfiltration. Activity logging integrated with the SIEM system provides the visibility needed to detect anomalous behavior, unauthorized access, and potential incidents in real time. Data augmentation is a pre-training concern. Inference time KPIs are performance metrics, not security controls. Separation of duties is important but is not the most critical control when web integration is the specific risk.
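
In practice, the control often takes the form of structured, append-only event logs that a SIEM forwarder can ingest. A minimal sketch with Python's standard logging module; the event fields, log path, and model name are illustrative.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
            **getattr(record, "ai_ctx", {}),     # structured context for the SIEM
        })

handler = logging.FileHandler("ai_activity.jsonl")   # a SIEM forwarder tails this
handler.setFormatter(JsonFormatter())
log = logging.getLogger("ai.web")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Every externally triggered inference becomes an auditable event.
log.info("inference", extra={"ai_ctx": {
    "user": "anon-7f3c", "source_ip": "203.0.113.9",
    "model": "doc-search-v4", "tokens_in": 412, "blocked": False,
}})
```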

19

An AI system is misclassifying images after a routine model update. An IS auditor discovers that the updated model file was replaced by an unauthorized version. Which of the following is the auditor's BEST recommendation?

A. Disable the automated update process to prevent future issues.
B. Immediately retrain the model from scratch using a secure data set.
C. Revert to the last verified model version and initiate a root cause analysis.
D. Notify all users of potential inaccuracies and deactivate the system.

Answer: C.

Reverting to the last verified model version immediately restores the integrity of the system while preserving the ability to investigate the incident. Root cause analysis then identifies how the unauthorized replacement occurred and what controls failed. Disabling the automated update process is a reactive measure that does not address the underlying security gap and may disrupt operations. Retraining from scratch is unnecessary and time-consuming when a verified version is available. Notifying users and deactivating the system (Option D) is more disruptive than reverting to a known-good version and does nothing to restore service integrity.
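
The control that makes "revert to the last verified version" possible is artifact integrity verification: each approved release is hashed into a registry, and the hash is checked before the model loads. A runnable sketch; the registry, file, and version name are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical registry: digest recorded when the version was approved.
artifact = Path("classifier-v12.bin")
artifact.write_bytes(b"approved model weights")            # stand-in model file
MODEL_REGISTRY = {"classifier-v12": sha256(artifact)}

def load_verified(version: str, path: Path) -> bytes:
    if sha256(path) != MODEL_REGISTRY.get(version):
        # Tampered or unauthorized artifact: refuse to load, alert, revert.
        raise RuntimeError(f"{path} fails integrity check for {version}")
    return path.read_bytes()    # stand-in for the real deserialization step

load_verified("classifier-v12", artifact)                  # passes
artifact.write_bytes(b"swapped by an attacker")            # simulate the incident
try:
    load_verified("classifier-v12", artifact)
except RuntimeError as err:
    print(err)                                             # tampering caught at load time
```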

Want a deeper breakdown of all three domains before continuing?

Read: AAIA Exam Domains Explained →

Domain 3: AI Auditing Tools and Techniques (21%)

20

An auditor plans to audit an organization's AI inventory containing hundreds of models. To optimize scope, the auditor should FIRST:

A. Select a random sample of 25 models for substantive testing.
B. Request source code for all models to review secure coding practices.
C. Perform an inherent risk assessment based on model use cases (e.g., hiring, credit decisions, medical triage).
D. Interview the Chief Data Officer about data retention policies.

Answer: C.

A risk-based approach prioritizes models with higher inherent risk for detailed testing. Random sampling (Option A) ignores risk concentration. Requesting all source code (Option B) is impractical and skips scoping entirely. Interviewing the CDO (Option D) is useful but not the first step — you need a risk-ranked inventory before you know which conversations matter.
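
What "risk-rank first" can look like in practice: score every inventory entry on inherent-risk factors, sort, and let the top of the list drive scoping. The factors and weights below are illustrative, not an ISACA-prescribed scheme.

```python
# Hypothetical inventory entries, each scored 1 (low) to 3 (high) per factor.
inventory = [
    {"model": "marketing-copy-gen", "decision_impact": 1, "pii": 1, "regulated": 1},
    {"model": "resume-screener",    "decision_impact": 3, "pii": 3, "regulated": 3},
    {"model": "credit-scorer",      "decision_impact": 3, "pii": 3, "regulated": 3},
    {"model": "warehouse-forecast", "decision_impact": 2, "pii": 1, "regulated": 1},
]

def inherent_risk(m: dict) -> int:
    # Decisions about people weigh most; the weights are illustrative.
    return 3 * m["decision_impact"] + 2 * m["pii"] + 2 * m["regulated"]

for m in sorted(inventory, key=inherent_risk, reverse=True):
    print(f"{inherent_risk(m):>2}  {m['model']}")
# Hiring and credit models land at the top and get substantive testing first.
```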

21

To evaluate the explainability of a black-box neural network used for loan approvals, which provides the BEST audit evidence?

A. A signed statement from the lead data scientist confirming model accuracy.
B. Documentation of SHAP (SHapley Additive exPlanations) values showing each feature's contribution to individual predictions.
C. Overall accuracy score (e.g., 95% on the test dataset).
D. Penetration test results against the model's API.

Answer: B.

SHAP values quantify each feature's contribution to individual predictions, providing explainability evidence that can be reviewed and challenged. A signed statement (Option A) is management representation, not audit evidence. Accuracy (Option C) measures performance, not explainability. Penetration testing (Option D) addresses security, not transparency of decision logic.
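
What that evidence might look like, sketched with the open-source `shap` library against a stand-in tree-based loan model. The model, features, and data are placeholders; the artifact the auditor reviews is the per-applicant attributions printed at the end.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))    # stand-in features: income, DTI, history, file age
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)   # stand-in approval score

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # one attribution per feature, per applicant

# For applicant 0: which features pushed the score up or down, and by how
# much relative to the model's baseline expectation.
print("baseline:", explainer.expected_value)
print("applicant 0 attributions:", shap_values[0])
```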

22

When auditing a generative AI system that drafts marketing copy and whose outputs vary probabilistically, the auditor should:

A. Conclude the system cannot be audited and disclaim an opinion.
B. Test the design and operating effectiveness of governance, data quality, and monitoring controls.
C. Manually review 1,000 outputs to calculate an error rate.
D. Require replacement with a deterministic rules-based system.

Answer: B.

Probabilistic AI systems can't be audited by testing individual outputs — the output space is too large and non-deterministic. The audit approach shifts to evaluating the controls around the system: how outputs are reviewed before publication, how quality is monitored, and how incidents are escalated. Disclaiming an opinion (Option A) is not appropriate when a controls-based approach is available. Reviewing 1,000 outputs (Option C) is impractical and statistically unreliable for this purpose.

23

An internal audit team uses an AI tool for anomaly detection on the general ledger. The primary risk is:

A. Automation reducing the need for human auditors.
B. Excess false positives causing alert fatigue and ignored valid alerts.
C. Violation of data privacy policy by accessing financial records.
D. The audit team needing to learn Python programming.

Answer: B.

High false positive rates cause auditors to treat alerts as noise. When that happens, real anomalies get buried. Alert fatigue is a documented failure mode in AI-assisted audit and security operations. Option A describes a workforce concern, not an audit risk. Option C is a governance risk that should be addressed before deployment, not the primary operational risk. Option D is a training gap, not a system risk.
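
The base-rate arithmetic behind alert fatigue is worth working through once: with rare true anomalies, even a modest false positive rate swamps the queue. All rates below are illustrative.

```python
# Illustrative rates for a general-ledger anomaly detector.
n_entries = 100_000
anomaly_rate = 0.001           # 1 in 1,000 entries is truly anomalous
detection_rate = 0.95          # the tool catches 95% of real anomalies
false_positive_rate = 0.02     # but also flags 2% of normal entries

true_alerts = n_entries * anomaly_rate * detection_rate               # 95
false_alerts = n_entries * (1 - anomaly_rate) * false_positive_rate   # 1,998

precision = true_alerts / (true_alerts + false_alerts)
print(f"{true_alerts:.0f} real vs {false_alerts:.0f} false alerts -> "
      f"only {precision:.1%} of alerts deserve the team's attention")
```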

24

An auditor reviews an AI incident response plan. Which scenario MUST the playbook address?

A. Network outage delaying model retraining by two hours.
B. Third-party LLM API provider deprecating the model version the organization uses.
C. Data scientist requesting a GPU workstation upgrade.
D. Minor formatting change to the monthly AI performance report.

Answer: B.

Dependency on third-party AI models creates supply chain risk. When a provider deprecates a model version — which OpenAI, Google, and Anthropic all do on rolling schedules — organizations that haven't planned for it face sudden capability loss. The incident response plan must address this scenario. Options A, C, and D describe operational events that don't require incident response procedures.

25

An organization uses AI to automate inventory counts across multiple distribution centers. What BEST supports automated inventory verification without physical site visits?

A. Natural language processing from shipping documents.
B. Machine learning to predict replenishment levels.
C. Computer-vision cameras connected to AI systems.
D. Robotic process automation to reconcile inventory records.

Answer: C.

Computer-vision cameras connected to AI systems can directly observe and count physical inventory in real time, enabling automated verification without requiring auditors to be on-site. NLP from shipping documents processes text records but cannot verify physical stock. ML-based replenishment prediction forecasts future needs rather than verifying current counts. RPA reconciles records in systems but cannot confirm physical inventory levels independently.

How Did You Score?

If terms like concept drift, SHAP values, evasion attacks, and the MAP function felt unfamiliar, that's the gap the exam will find. The AAIA doesn't test whether you've read the frameworks. It tests whether you can apply them to scenarios you haven't seen before.

Reading the official guide is a start. Passing requires enough repetition that the frameworks and the MLOps lifecycle become automatic — not something you reconstruct under time pressure.

1,155 Questions. Every Domain. Every Framework.

Other AAIA apps exist, but AAIA Prep has the deepest question bank. 1,155 practice questions mapped to the 33/46/21 domain weighting — nearly double the competition. Every question includes a full explanation. The domain accuracy dashboard shows exactly where you're losing points. Most candidates who pass on the first attempt spend 6 to 8 weeks cycling through questions until their readiness score holds above 75%.

  • 1,155 Practice Questions: Mapped to the 33/46/21 domain weighting. Scenario-based, framework-mapped, exam-difficulty.
  • Domain Accuracy Dashboard: Track performance by domain. See exactly where you're losing points.
  • 8 Adaptive Study Modes: Including Weakest Subject mode that targets your gaps automatically.
  • Full 90-Question Mock Exams: Scaled 200–800 scoring to test readiness under timed conditions.
  • 21 AI Governance Frameworks: All 21 frameworks tested by ISACA, mapped to the domains where they appear.
  • 200 Spaced-Repetition Flashcards: Retain technical vocabulary of MLOps, adversarial threats, and governance frameworks.
Download Free on the App Store

Want a deeper breakdown of each exam domain?

Read: AAIA Exam Domains Explained →

References

  1. ISACA. "AAIA Exam Content Outline." https://www.isaca.org/credentialing/aaia/aaia-exam-content-outline
  2. NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." https://www.nist.gov/itl/ai-risk-management-framework
  3. ISO. "ISO/IEC 42001:2023 — AI management systems." https://www.iso.org/standard/42001
  4. European Parliament. "EU Artificial Intelligence Act." https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
