Gen AI, Agentic AI & AGI for Internal Compliance Audits: The Future Has Already Started

AI is transforming internal compliance audits. It reduces manual effort, uncovers risks sooner, and helps teams move faster. This guide shows how audit teams are using GenAI and agentic AI today — and how they can start preparing for what’s next.

Role of AI in internal compliance audits today

AI now plays a practical, hands-on role in internal compliance audits; it’s no longer a futuristic idea. AI is helping teams reduce manual effort, increase visibility, and shift from one-time checklists to more responsive, risk-driven audits.

Chris Oshaben, a compliance automation advisor with certifications including CISA, CRISC, and CDPSE, explains that AI auditing tools can be categorized into three groups: large language models (LLMs), machine learning (ML), and workflow automation.

He says all three AI types can apply to compliance audits by performing the following functions:

  • Collecting and validating audit evidence: AI tools can pull system data, compare it to compliance control requirements, and flag inconsistencies or missing items (see the sketch after this list).
  • Detecting anomalies and control failures: Machine learning models scan large datasets to identify outliers, exceptions, and control gaps, often in real time.
  • Summarizing documents and policies: Large language models can interpret complex documentation and generate clear, usable summaries for audit workpapers.
  • Drafting audit outputs: From test-of-design narratives to full reports, AI can generate first-pass content that saves time and reduces rework.
  • Managing audit workflows: Automation handles recurring steps, such as sending audit requests, coordinating stakeholder responses, and routing approvals.
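
As a rough illustration of the first function, here’s a minimal Python sketch of automated evidence validation: pull a system setting, compare it to a control requirement, and flag inconsistencies or missing items. The field names and the fetch_password_policy helper are hypothetical, not any particular vendor’s API.

    # Minimal sketch: compare pulled system data to a control requirement
    # and flag inconsistencies or missing items. All names are illustrative.
    REQUIRED_PASSWORD_POLICY = {"min_length": 12, "mfa_enabled": True}

    def fetch_password_policy():
        # Stand-in for a real API call to an identity provider.
        return {"min_length": 8, "mfa_enabled": True}

    def validate_evidence(actual, required):
        findings = []
        for key, expected in required.items():
            observed = actual.get(key)
            if observed is None:
                findings.append(f"{key}: evidence missing")
            elif observed != expected:
                findings.append(f"{key}: expected {expected}, observed {observed}")
        return findings

    for finding in validate_evidence(fetch_password_policy(), REQUIRED_PASSWORD_POLICY):
        print("FLAG:", finding)  # e.g. FLAG: min_length: expected 12, observed 8

In a real pipeline, the fetch step would call a system API and the findings would feed audit workpapers rather than stdout.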

As Oshaben explains, one of the most immediate wins comes from automating routine work.

“Building custom flows to automate manual processes is the key here, and can be effective for evidence collection, stakeholder communications, automated approval workflow, and other use cases that used to be manual,” he says.

Micah Spieler, Chief Product Officer at Strike Graph, agrees that the real value of AI emerges when it reduces friction in everyday audit work. One of the biggest pain points, he says, is how long it takes to get back up to speed after stepping away.

“There’s a steep learning curve every time you revisit a compliance program you haven’t touched in 6 or 12 months,” Spieler notes. “That’s incredibly inefficient.”

These tools are already reshaping the audit process. Next, we’ll examine where AI is having the greatest day-to-day impact.

AI doesn’t usually enter the compliance audit process all at once. It appears in specific, often subtle ways: a tool that flags missing evidence, a workflow that triggers itself, or a model that highlights an unexpected pattern in system logs.

Over time, those individual upgrades begin to accumulate. The compliance audit process becomes faster, more focused, and easier to scale.

Here’s where AI is having the most impact right now:

  • Catching issues earlier: ML models can flag anomalies and control gaps as they occur, not weeks or months later (see the sketch after this list).

  • Getting through documentation faster: LLMs help teams review compliance policies, procedures, and supporting evidence quickly, pulling out key points instead of starting from scratch.

  • Reducing low-value work: Automation handles repeatable tasks, such as requesting documents, logging findings, and sending follow-ups.
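
To make the first item concrete, here’s a small sketch of anomaly detection using an isolation forest, one common ML technique for spotting outliers. The features (login hour, bytes transferred) and the contamination setting are assumptions for illustration, not how any specific audit platform works.

    # Sketch: flag anomalous access events with an isolation forest.
    # The feature set (login hour, bytes transferred) is illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    events = np.array([
        [9, 120], [10, 130], [11, 110], [9, 125], [10, 140],  # routine activity
        [3, 9800],                                            # 3 a.m. bulk transfer
    ])

    model = IsolationForest(contamination=0.2, random_state=0).fit(events)
    labels = model.predict(events)  # -1 marks an outlier

    for event, label in zip(events, labels):
        if label == -1:
            print("Anomaly flagged for review:", event)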

“This isn’t about replacing human judgment,” says Spieler. “It’s about focusing that judgment where it matters most.”

Oshaben agrees — and cautions that judgment can only be effective if the AI systems feeding it are grounded in the right context.

“These AIs do not have the context that we do in our own brains,” Oshaben says. “This is especially true for LLMs, which are ineffective at standardization and produce a lot of ‘slop’ as they will assume their own context to fill in the gaps.”

When used thoughtfully, AI enables audit teams to focus on real risks rather than repetitive tasks. However, as Spieler and Oshaben both point out, the quality of those insights depends on keeping people informed and ensuring that AI systems are fed the right context. That’s how teams shift from reactive audits to more continuous, business-aligned assurance.

How GenAI is accelerating the internal audit life cycle

Generative AI is accelerating internal compliance audits by automating tasks that previously took hours or even days. From summarizing evidence to drafting audit findings, it’s helping teams move faster without sacrificing quality.

Oshaben sees generative AI, especially large language models, as a natural fit for audits and compliance. One common use case, he notes, is writing or revising policies: “If I were updating, writing, or interpreting a policy, I would absolutely utilize an LLM.”

Here’s where generative AI is gaining traction in compliance audit workflows:

  • Drafting audit reports and findings: LLMs can produce initial language for control reviews, exceptions, and remediation recommendations, based on structured inputs and control data.

  • Interpreting and describing evidence: Instead of starting from scratch, auditors can feed documents to an LLM and get a concise summary of what’s relevant.

  • Writing test narratives: LLMs can describe how a control is designed or operated, helping auditors build out workpapers more efficiently.

  • Generating audit requests: Based on control descriptions and test procedures, GenAI tools can create tailored request lists to send to stakeholders.

  • Creating reusable audit knowledge: Generative AI can serve as a living knowledge base of your organization’s systems, risks, and evidence when paired with retrieval-augmented generation (RAG), a method that provides AI with real-time access to trusted internal content. (A minimal retrieval sketch follows this list.)
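
Here’s a minimal sketch of the RAG pattern under simplifying assumptions: a production system would retrieve with vector embeddings and then call an LLM, while this toy version ranks snippets by keyword overlap and simply assembles the grounded prompt. The policy snippets are invented.

    # Minimal RAG-style sketch: retrieve relevant internal policy text,
    # then ground the prompt in it so the model answers from that context.
    POLICY_SNIPPETS = {
        "access-control": "Access reviews are performed quarterly by system owners.",
        "encryption": "All data at rest is encrypted with AES-256.",
        "backups": "Backups run nightly and are tested every six months.",
    }

    def retrieve(question, k=1):
        # Naive relevance score: count question words present in each snippet.
        words = question.lower().split()
        ranked = sorted(
            POLICY_SNIPPETS.values(),
            key=lambda text: -sum(w in text.lower() for w in words),
        )
        return ranked[:k]

    question = "How often are access reviews performed?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this grounded prompt would then be sent to the LLM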

GenAI tools help reduce rework, improve consistency across documentation, and give audit teams more time to focus on analysis and higher-risk issues.

Generative AI tools aren’t just promising — they’re already helping audit teams simplify documentation, highlight risks, and shorten review cycles.

Some well-known models, like ChatGPT and Claude, are being used to:

  • Draft risk narratives tailored to specific frameworks or findings.

  • Translate technical evidence into plain-language summaries.

  • Simulate test scenarios to evaluate control design or failure points.

These capabilities help teams focus more precisely, avoiding oversampling and zeroing in on high-risk areas.

Some audit platforms are now embedding GenAI directly into their systems. For example:

  • Thomson Reuters’ Audit Intelligence Analyze uses generative AI to scan and categorize transaction data, helping auditors spot red flags more quickly. That shortens the audit timeline and reduces the need for manual checks.

  • Strike Graph’s Verify AI runs automated quality checks on every piece of submitted evidence:
    • A difference check compares current documents to past submissions and flags significant changes.
    • A description check analyzes whether the content actually matches the description or label.
      If either check finds a mismatch, the evidence is flagged for review.

“Customers tell us Verify AI helps them think more intentionally about the evidence they provide,” says Spieler. “It gives real-time feedback, which makes their programs stronger.”
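
Conceptually, a difference check can be as simple as measuring how far a document has drifted from its prior submission. The sketch below is a generic illustration of that idea using Python’s difflib; it is not Strike Graph’s implementation, and the 0.8 threshold is an arbitrary assumption.

    # Conceptual "difference check" sketch -- compares a current submission
    # to the prior one and flags the evidence when similarity drops.
    from difflib import SequenceMatcher

    def difference_check(previous_text, current_text, threshold=0.8):
        similarity = SequenceMatcher(None, previous_text, current_text).ratio()
        return similarity < threshold  # True means "flag for review"

    prior = "Firewall rule review completed 2024-Q3; zero exceptions noted."
    current = "Firewall review skipped this quarter pending network migration."
    if difference_check(prior, current):
        print("Evidence flagged: significant change from prior submission")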

These examples show that GenAI isn’t limited to generating paragraphs. It’s being built into core audit functionality, where it can improve quality, catch errors, and support better decisions throughout the audit life cycle.

Current best practices for using generative AI in internal audits 

Generative AI can be a powerful tool in internal audits, but only when used with the right safeguards. Without clear inputs and controls, these systems can produce summaries that look polished but miss the point, or worse, misstate facts.

Here are five practical guidelines for using GenAI effectively and responsibly in audit workflows:

  • Start with clean, structured inputs
    Generative AI performs best when it has clearly labeled data. Controls, test procedures, and evidence should be tagged and framed in a way that the system can interpret consistently (see the example after this list).

  • Use secure environments for testing
    When piloting GenAI tools, avoid exposing sensitive audit data. Test new workflows in sandboxed environments using dummy inputs before deploying them to production.

  • Always keep a human in the loop
    LLMs can generate good first drafts, but auditors need to verify accuracy, logic, and alignment with frameworks. Final sign-off should always rest with a human reviewer.

  • Train the model on internal policies and controls
    Generic language models won’t know your organization’s frameworks unless you teach them. Embedding company-specific data, via fine-tuning or retrieval-augmented generation (RAG), improves output relevance and reduces hallucinations.

    With RAG, the model generates responses grounded in your actual controls, policies, or frameworks.

    “Most RAG LLMs ensure that only data from provided sources are referenced so that it only operates with the context given,” Oshaben says.

  • Validate output against audit standards
    Don’t assume GenAI understands ISO 27001, SOC 2, or internal audit frameworks such as the Institute of Internal Auditors’ International Professional Practices Framework (IPPF), which guides how audits are planned, executed, and reported. Always cross-check AI-generated summaries and recommendations against applicable standards and documentation.
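
As an example of the first guideline, here’s what a cleanly structured control record might look like, expressed as a Python dict. Every field name is hypothetical; the point is that consistent, labeled structure gives the model unambiguous context instead of free-form text.

    # Hypothetical, cleanly structured control record a GenAI tool can
    # interpret consistently. Field names are illustrative only.
    control_record = {
        "control_id": "AC-02",
        "framework": "SOC 2",
        "description": "User access is reviewed quarterly by system owners.",
        "test_procedure": "Inspect the most recent quarterly access review.",
        "evidence": [
            {"type": "report", "label": "2024-Q4 access review", "status": "collected"},
        ],
        "tags": ["access-control", "quarterly"],
    }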


With the right setup, GenAI can help audit teams move faster and with more consistency, without losing oversight or control.

Check out the Top 5 AI best practices for your security program for a clear starting point to deploying generative AI for your team. We also cover AI security issues in our SecureTalk podcast. Two of my favorite episodes that apply to what we’re discussing today are “The AI wars and what DeepSeek means to AI and security” and “Unlocking AI’s potential privately, safely and responsibly with Dan Clarke.”

How agentic AI is changing internal audits

Agentic AI is already starting to reshape how internal audits get done. While it’s still early in its evolution, audit platforms are now incorporating agents that can perform multi-step tasks, such as triggering evidence requests, verifying document quality, and escalating exceptions, without manual initiation.

These tools go beyond GenAI’s text generation or traditional automation’s rule-following. Agentic AI systems can interpret audit context, activate other tools, and adapt their actions based on the data they receive. They’re not replacing auditors, but they are reducing the need for hand-holding at every step.

Spieler puts it this way: “Agentic AI was meant to describe AI that can use other tools in its chain of thought. It detects a need, activates the right tool, and updates its own memory. That’s where things get really interesting.”

This new model is still developing, but early implementations are already helping teams speed up work, reduce friction, and maintain focus on high-risk decisions, without getting bogged down in busywork.

Still, many in the space see agentic AI not as an overnight shift, but as the beginning of a long, careful evolution, especially in fields like compliance, where risk tolerance is low.

Guru Sethupathy, Founder and CEO of FairNow, an AI governance platform that simplifies AI risk management and compliance, sees this as a period of necessary experimentation.

“We are in the hype portion of the AI cycle,” he says. “But hype actually serves an important purpose. Hype attracts dollars and talent to experiment. And all of that experimentation accelerates the process of the market figuring out the best use cases and applications of a new technology.”

Sethupathy adds a caution: “We have to be careful with how we use AI in compliance. Compliance is a high-risk area, and if an AI system hallucinates and that leads to non-compliance, that is a risk most companies do not want to take. So we (FairNow) are figuring out what parts of compliance can be automated with low risk while also building a specialized compliance-optimized agent that has superior performance where it matters.”

Agentic AI is already being used to manage audit tasks that previously required significant human oversight. Today, intelligent agents in compliance workflows can do the following (a simplified sketch appears after the list):

  • Automate evidence requests
    When documentation is missing or outdated, agents can detect the gap and automatically contact the relevant control owner.


  • Classify and tag documents
    Instead of relying on manual sorting, AI agents scan and categorize evidence for faster review and retrieval.


  • Run risk models in real time
    As new data becomes available, agents adjust risk scores and deliver live updates to support informed decision-making.


  • Flag control gaps across frameworks
    By mapping requirements from SOC 2, ISO 27001, or HIPAA, agents can surface where evidence is weak or misaligned.
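
Here’s a toy sketch of a single agentic step in that spirit: detect a need, activate the right tool, and update memory. The tool names, fields, and dispatch logic are all invented for illustration.

    # Toy agentic step: detect a gap, choose a tool, act, record the action.
    def request_evidence(control):
        print(f"Emailing {control['owner']}: please upload {control['id']} evidence")

    def tag_document(control):
        print(f"Tagging latest document for {control['id']}")

    TOOLS = {"missing_evidence": request_evidence, "untagged_doc": tag_document}
    memory = []  # the agent's running record of what it has done

    def run_agent_step(control):
        # 1. Detect a need, 2. activate the right tool, 3. update memory.
        need = "missing_evidence" if not control.get("evidence") else "untagged_doc"
        TOOLS[need](control)
        memory.append({"control": control["id"], "action": need})

    run_agent_step({"id": "AC-02", "owner": "it-ops@example.com", "evidence": []})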


As Spieler explains, these agentic steps are already part of real workflows. “We’re already using workflows with agentic steps,” he says. “It’s not just one GenAI prompt and response. It’s a series of tasks — analyzing, verifying, escalating — that chain together without manual input.”

These AI agents are already handling tasks like reviewing access logs, confirming encryption settings, and cross-checking user roles across systems — functions that once required manual review.

As these systems mature, agentic AI could become the backbone of continuous, intelligent compliance, not just an assistant, but a second set of hands that never gets tired.

Tips for using agentic AI in internal audits

To use agentic AI effectively in internal audits, start with well-defined, repeatable tasks. Always keep human judgment in the loop, and choose task-specific agents that are easy to test, monitor, and refine. The goal isn’t full automation. It’s smarter workflows with tighter oversight.

Agentic AI can lead to faster, more accurate audits, but only if it’s deployed responsibly. One of the biggest risks is overreliance on third-party AI providers without clear data protections.

Spieler explains: “If a platform just outsources everything to a third-party AI, that’s a red flag. You lose control of your data, and if that AI goes down, your system goes down with it.”

Security, transparency, and the ability to tailor workflows to your environment are essential when adopting agentic AI. Here’s how to get started the right way:

  1. Start with repeatable tasks
    Use agents to handle consistent, rules-based tasks such as evidence collection, scoping, or testing controls, where their impact is immediate and measurable.


  2. Use task-specific agents, not general-purpose bots
    Design agents to do one thing well. A dedicated scoping agent will outperform a generic assistant trying to do everything.


  3. Define when to escalate to a human
    Set clear rules for when an AI agent should defer to a person, especially when a decision involves risk, exceptions, or policy interpretation (see the sketch after this list).


  4. Continuously monitor agent performance
    Track how agents perform over time. Look for trends in success rates, flag volume, and review feedback to identify areas for improvement.


  5. Prioritize secure deployment
    Keep agents running inside secure, sandboxed environments. Avoid tools that require sending sensitive data to third parties unless absolutely necessary.
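
For tip 3, escalation rules can be made explicit in code. This sketch assumes made-up categories and a 0-to-10 risk score; the specifics would depend on your own risk taxonomy.

    # Sketch: the agent defers to a human whenever risk, exceptions, or
    # policy interpretation are involved. Thresholds are assumptions.
    ESCALATION_TRIGGERS = {"policy_interpretation", "exception", "high_risk"}

    def handle_finding(finding):
        if finding["category"] in ESCALATION_TRIGGERS or finding["risk_score"] >= 7:
            return f"ESCALATE to human reviewer: {finding['summary']}"
        return f"Auto-resolved by agent: {finding['summary']}"

    print(handle_finding({"category": "routine", "risk_score": 2,
                          "summary": "evidence refreshed"}))
    print(handle_finding({"category": "exception", "risk_score": 8,
                          "summary": "control failed twice"}))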


Trust is fundamental, especially when semi-autonomous systems are handling audit tasks.

“If I can’t understand how a workpaper or conclusion was created, I can’t trust it,” Oshaben says.

The future of internal audits conducted entirely by AGI

Artificial General Intelligence (AGI) refers to AI systems that can learn, reason, and adapt across a wide range of knowledge areas. In a compliance audit setting, this could mean understanding business processes, interpreting regulations, identifying risks, and deciding how to test controls, all without being limited to one narrow task.

Imagine an audit process that runs continuously in the background, automatically analyzing systems, validating controls, updating reports, and escalating concerns in real time. Audit wouldn’t be a project anymore. It would be a system.

How OpenAI’s five stages toward AGI apply to compliance

OpenAI has defined five stages of progress toward AGI. The graphic below illustrates Strike Graph’s vision for the evolution from chat-based helpers to autonomous audit systems.

In 2025, most compliance teams that use AI are somewhere between Stage 1 and Stage 2. Each stage brings more automation, autonomy, and strategic insight into the audit process.

Strike Graph’s 5 Stages of AI-Driven Compliance towards AGI

What happens to AI in internal audits when we reach AGI?

If AGI becomes part of internal audits, it won’t just change who does the work—it will redefine the process entirely. Audits could become fully autonomous, with AI systems mapping scope, testing controls, analyzing risk, and generating reports continuously. Human auditors would shift into oversight and exception-handling roles.

Here’s what that shift might look like:

  • Audit prep disappears
    AGI could map business processes and define audit scope automatically—without meetings, templates, or kickoff calls.


  • Testing is fully automated
    Instead of sampling, an AGI system could test entire data populations, simulate edge cases, and detect exceptions in real time.


  • Reporting becomes real-time
    Status dashboards and audit trails would update continuously, offering live assurance rather than delayed snapshots.


  • Auditors become reviewers and advisors
    The human role would shift toward validating outputs, reviewing exceptions, and managing judgment calls that require ethics or nuance.


There is much debate about how close we really are to achieving AGI.

“AGI is a moving target,” Spieler says. “Academics think it’s 10–15 years away. Tech leaders say it’s just around the corner. But no one really agrees on what ‘generally intelligent’ even means.” 

He adds, “Personally, I think we’re much further away from AGI than people expect. Large language models can predict language, but they don’t think. That’s a key limitation.”

Oshaben believes the audit profession will adapt in tandem with AGI. He sees a future with “fewer internal auditors, leveraging agentic tools to do more audits. Auditors could work in harmony with agentic tools to conduct better, context-driven audits.” 

He also expects “faster end-to-end processes, cutting down time to reporting from 6–12 months on average to less than 30 days.”

Sethupathy views AI development through the lens of other evolutions, such as the internet and the cloud.

“AI adoption will not happen overnight,” he says. “It will take years, even a decade or more. AI technology will improve much faster than AI adoption because there are two challenges that need to be overcome for adoption. First, companies need to figure out how to manage the risks related to AI. Second, companies need to figure out how to implement AI solutions in their workflows, upskill talent to work with AI, and so on. All that takes time.”

He adds: “Think about how long it took for companies to become digital companies, or data companies, or cloud companies. Enterprise AI adoption will be even harder in many ways.”

In the meantime, the journey toward AGI is already reshaping audit work through its early building blocks — agentic AI, retrieval-augmented systems, and integrated reasoning models. Teams that learn to work with these technologies now will be well-positioned if and when the next leap arrives.

Some experts believe the eventual leap beyond AGI could be “superintelligence,” AI that exceeds human intelligence across all domains. While theoretical, the concept underscores the seemingly endless possibilities of artificial intelligence.

How to pick an AI-powered compliance management tool

The right AI tool can make audits faster, smarter, and easier to manage. However, not every platform that claims to be “AI-powered” is built to handle real audit complexity. Some simply bolt on generative features. Others rely on third-party models that give you little control or insight into how your data is being used.

Oshaben gives this plain advice: “Ask the vendor to prove to you ways that you can trust its AI output. If they cannot provide proof that you can trust outputs, then you probably cannot.”

Here’s what to look for when evaluating AI-enabled compliance platforms:

  • AI-native architecture
    Choose software built from the ground up for AI, not tools where AI was added later as a bolt-on. Native platforms tend to be faster, more stable, and easier to scale.


  • Support for both GenAI and agentic AI
    Generative AI helps with drafting and summarizing. Agentic AI supports real automation, like scoping audits, collecting evidence, or verifying controls.


  • Clear data ownership and model transparency
    Ensure you understand how your data is handled, where it’s stored, and whether it’s used to train external models. Avoid black-box AI, where you can’t audit the audit tool.


  • Proven, in-production use cases
    Look for examples of automation that’s already working, not promises on a product roadmap. Ask to see the AI in action.


    “Some competitors just pipe in ChatGPT and call it AI,” says Spieler. “That’s risky. Generative AI predicts responses — it doesn’t think. We add validation layers, confidence scores, and logic checks.”

If you're planning a full platform review or vendor search, it helps to work from a structured checklist. Download our GRC Buyers’ Guide for an in-depth evaluation framework.

Strike Graph makes AI-powered internal audits real. Our platform is built for automation, accuracy, and real-time compliance, so you can move faster, reduce risk, and stay audit-ready year-round.

At Strike Graph, we didn’t add AI as an extra — we built our entire platform around it. From GenAI to agentic workflows, our system helps compliance teams accomplish more with less manual work in less time. And it’s all secured by design.

Why Strike Graph is the fastest path to AI-powered compliance:

  • GenAI speeds up key tasks
    Draft security questionnaires, prepare for audits, and generate control evidence faster and with less rework.

  • Secure by default
    Your data stays protected and is never sent to third-party AI systems outside your control.

  • Real-time risk modeling
    Our platform adapts to your actual environment, not generic templates, so controls are always aligned with real business risks.

  • Automated evidence collection
    Secure integrations pull fresh data continuously, making audit prep and gap analysis automatic, not manual.

We don’t just make audits faster. We make them smarter so your team can focus on strategy instead of chasing documents.

It’s not just possible to do internal audits with AI; it’s already happening. Don’t settle for software with AI cobbled on after the fact; choose a platform purpose-built for AI, with the architecture true agentic AI requires.

Let Strike Graph show you how. Demo our AI-powered compliance management platform today. 

Keep up to date with Strike Graph.

The security landscape is ever changing. Sign up for our newsletter to make sure you stay abreast of the latest regulations and requirements.