Learn how AI helps businesses manage risk, comply with policies, and make smarter decisions. Explore real-world examples in action today, and see how our experts envision the future of AI in governance, risk, and compliance.
Executive Summary:
AI is reshaping GRC by enabling teams to work faster, identify risks earlier, and stay ahead of regulatory changes. From scanning policies and controls to drafting audit-ready reports, AI automates routine tasks, allowing teams to focus on strategy. Real-world use cases demonstrate that AI enhances decision-making, reduces errors, and shortens review times. AI can also continuously scan internal and external data, helping compliance teams move from reactive risk management and periodic check-ins to proactive strategies and continuous compliance. While AI won’t replace human judgment, it’s becoming essential for modern GRC programs. Tools like Strike Graph’s Verify AI and Security Assistant show how purpose-built AI can streamline audits and support compliance from the ground up.
What is AI in GRC?
AI in GRC enables companies to manage governance, risk, and compliance by efficiently sorting through large amounts of information. It doesn’t replace people, but it helps teams review documents faster, spot problems, and stay on top of ever-changing regulations.
GRC is the framework companies use to make sure they’re playing by the rules, whether those come from regulators, internal policies, or their own ethical standards. AI doesn’t run these programs, but it’s quickly becoming a key tool that supports the people who do.
Say a company updates its internal security policy. An AI system might cross-check that policy against half a dozen frameworks and suggest changes that align with ISO or SOC 2. It might also detect gaps in past audits or flag access logs that show unusual behavior after hours.
This kind of support doesn’t replace judgment, but it does help teams spend less time chasing down details and more time deciding what those details mean. As pressure builds to keep up with shifting regulations, many GRC teams see AI not as a future investment, but as a tool they need right now.
What is the role of AI in GRC?
AI plays a significant role in GRC by helping teams analyze large datasets, detect risks, and monitor compliance more efficiently than traditional methods allow. It handles time-consuming review work, such as checking policies or scanning system logs, so that GRC professionals can focus on judgment, interpretation, and response.
In most organizations, GRC teams face a mountain of documentation, including policies, controls, audit trails, and risk registers. AI helps lighten that load. Instead of reviewing files manually, teams can use AI tools to flag what’s missing, outdated, or potentially problematic.
As Micah Spieler, Chief Product Officer at Strike Graph, puts it: “The most meaningful shift AI will bring to the GRC space is the ability to make smarter, more nuanced recommendations in real time. For example, if a single control maps to 15 frameworks, AI can read all those frameworks, understand how they connect, and tell you exactly how to tweak that control. A human can’t keep up at that scale.”
That kind of help doesn’t make AI the decision-maker. The tools are fast, but not perfect. They don’t understand the business environment or regulatory nuance the way a human does. What they offer is speed, consistency, and pattern detection. GRC professionals still need to verify findings, weigh context, and decide what actions to take.
The goal isn’t to replace human oversight. It’s to free teams from routine review work so they can focus on strategy, risk prevention, and high-stakes decisions. Used well, AI becomes part of a feedback loop: it surfaces what matters, and people decide what to do about it.
How AI is transforming governance, risk, and compliance
How AI is impacting governance
In many organizations, AI is becoming a practical tool for helping leadership teams stay on top of complex information. It’s not there to make decisions, but it can help people make them faster and with better preparation.
It helps leadership teams prepare by pulling key details from reports, highlighting areas that may need attention, and organizing information so it’s easier to act on.
Some companies now use AI to go through board discussions and draft short recaps. Others rely on it when a new law or standard comes out, letting the system compare it against internal rules and recommend what might need to be updated. In a few cases, the technology is used to watch for behavior shifts, such as unusual communication patterns, that could suggest something needs a closer look.
AI is also helping with document management. For example, a system might scan an outdated code of conduct and point out sections that don’t align with current expectations. These suggestions give leadership a chance to address risks while they’re still small.
At the end of the day, people still call the shots.
AI use cases for governance
More governance teams are experimenting with software that lightens the load, especially around research, documentation, and internal reports. The tools aren’t perfect, but when used right, they give decision-makers a better read on what’s happening inside the business.
A few examples:
- Executive support tools: McKinsey has developed a system that tracks internal performance, external events, and market data. It helps leadership stay ahead of changes, rather than reacting afterward.
- Meeting recaps: Some companies now have software that listens to board meetings and produces short summaries. This saves time for directors who’d rather focus on what was decided than re-read a transcript.
- Policy checkups: Some tools now scan internal policies against regulatory updates to spot mismatches or outdated sections. In a 2024 study published in the International Research Journal of Engineering Science Technology and Innovation, one company reported a significant improvement in policy adherence — up 95% — after deploying AI to monitor internal compliance more closely.
- Legal review at scale: JPMorgan implemented a tool that goes through loan documents in seconds. This replaced a process that once took hundreds of thousands of hours per year. Others are using similar tools to scan contracts and catch issues earlier in the process.
- Looking for trouble early: A few firms are trying AI that watches for small signs of misconduct, like out-of-character trades or red-flag language in internal messages. The idea isn’t to accuse, just to give someone a heads-up to take a look.
How AI is impacting risk management
AI is helping companies recognize threats and decide when to respond. In some situations, it immediately alerts teams to unusual activity. In others, it highlights vulnerabilities that might have stayed buried for months. The key shift is timing. More organizations are spotting issues early.
Many risk assessments still rely on long lists, vendor records, and system logs that someone has to comb through by hand. The challenge isn’t just complexity. It’s volume. There's simply too much information for even large teams to process in real time. Certain technologies now assist by connecting the dots, helping teams prioritize what to examine first.
That doesn’t mean people step aside. Risk managers still make the decisions. What changes is the pace: they’re no longer limited to scheduled reports or formal audits. With earlier warnings, there’s more time to act and more chances to prevent serious problems.
AI use cases for risk management
Software is starting to play an important role in risk management work. It doesn’t prevent problems on its own, but it can point teams toward trouble sooner than a spreadsheet ever could. In industries where hours matter, that kind of lead time can change the outcome.
Here’s how some companies are using these systems:
- Fraud signals at major banks: At Barclays, analysts have a tool that highlights payment patterns that seem off, such as amounts just below alert thresholds or activity from odd locations. Mastercard uses a similar process to monitor merchant behavior. After rolling it out, they tripled the speed at which they flag suspicious activity.
- Supply chain pressure points: When supply routes collapsed during COVID-19, Western Digital used modeling software to figure out which suppliers were falling behind. They were able to reroute early and ended up saving tens of millions. Maersk now does something similar to avoid delays before they build up at the docks.
- Third-party paperwork, done faster: Ernst & Young scans vendor files to spot missing terms and expired certifications. What used to take two weeks now wraps up in about a day, with far fewer back-and-forth emails.
- Watching for strange behavior on internal systems: Companies are beginning to look not just at whether credentials are valid, but how people use them. One firm watches for spikes in downloads or odd login times. If something feels off, it goes to IT for review—often long before anything goes public.
These examples don’t prove that AI solves risk. They show that it gives teams a few more chances to catch problems before they boil over.
How AI is impacting compliance
In many companies, compliance once meant periodic check-ins, scheduled audits, time-consuming reviews, and reactive fixes. That model no longer holds up. Today, some teams are turning to AI automation to monitor continuously, cut down on manual reviews, and respond in real time when something changes.
This shift is becoming more urgent. Regulatory frameworks are growing more complex, and the volume of data compliance teams must review keeps climbing. Some organizations are using AI to sort through it faster — flagging gaps, reviewing access changes, and checking that internal controls still match what the law requires.
Chris Ferrell, Chief Technology Officer at Valkit.ai, believes this shift is more than just incremental. He sees a break from the old model of compliance as a delayed reaction.
“AI can continuously ingest and analyze vast data streams, surfacing potential issues before they ever arise,” Ferrell says. “By auditing every policy change, user permission update, and workflow event against both external regulations and internal standards, AI can detect emerging risks with unprecedented speed and precision. This shifts risk management from a reactive scramble to a proactive assurance strategy, helping organizations resolve threats as they emerge.”
Some results are already measurable. A 2025 research paper titled “Harnessing the Power of Generative Artificial Intelligence (GenAI) in Governance, Risk Management and Compliance (GRC)” cites a PwC case study in which GenAI tools identified regulatory changes with 90% accuracy and helped reduce compliance-related mistakes by 75%. For the companies involved, that meant fewer errors, lower risk of penalties, and more time spent on strategic work rather than paperwork.
AI use cases for compliance
AI helps compliance teams work faster by reviewing unstructured data, monitoring security controls, and automating audits. It also helps teams understand and comply with different rules across multiple frameworks.
Here's how organizations are using AI in compliance:
- Synthesizing “unstructured data”
Many compliance frameworks require organizations to gather “semi-structured” data: notes on how often board meetings are held, whether the company has internal security committees, and how frequently they conduct security reviews. This information isn’t stored in structured files like spreadsheets. Instead, it’s buried in emails, meeting notes, or policy documents.
“AI is a perfect fit for parsing and organizing unstructured data,” says Jay Bartot, a veteran tech entrepreneur who has built and sold multiple startups, served as CTO at Madrona Venture Labs, and is now co-founding an enterprise intelligence venture.
“One of the biggest leaps with modern AI, especially large language models, is how well they handle unstructured or semi-structured text,” he explains. “That used to be a weak spot in a developer's toolkit. You couldn’t throw raw text at a machine learning model and expect a deep understanding of the content. Now, with LLMs, not only can you do that, but it turns out they are remarkably good at understanding the meaning of the content.”
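To make that concrete, here’s a minimal sketch of the kind of extraction Bartot describes: prompting a language model to turn free-form meeting notes into structured compliance fields. The call_llm function is a hypothetical placeholder for whatever model API a team actually uses, and the prompt, field names, and JSON shape are illustrative assumptions rather than part of any specific product.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a large language model API call.
    Swap in your organization's approved model client here."""
    raise NotImplementedError

def extract_governance_facts(meeting_notes: str) -> dict:
    """Ask the model to turn free-form meeting notes into structured compliance evidence."""
    prompt = (
        "From the meeting notes below, extract a JSON object with these fields:\n"
        "  board_meeting_frequency (string), has_security_committee (boolean),\n"
        "  last_security_review_date (string or null), open_action_items (list of strings).\n"
        "Return only valid JSON.\n\n"
        f"Notes:\n{meeting_notes}"
    )
    raw = call_llm(prompt)
    # A human reviewer should still spot-check these fields before filing them as audit evidence.
    return json.loads(raw)
```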
- Continuous monitoring and testing of internal controls
To confirm that internal safeguards are functioning, teams typically run periodic tests on a monthly, quarterly, or as-needed basis, often as part of formal audits. The gaps between those tests can leave risks unnoticed. Some organizations now use AI to monitor system behavior more consistently. The software reviews access logs, data movement, and user actions to uncover patterns that may signal something is off.
“AI will help compliance shift from periodic testing to continuous compliance,” says Spieler. “That paradigm shift represents a huge game-changer that will help organizations maintain a stronger compliance posture overall, instead of just checking compliance boxes during quarterly security checks.”
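As a rough illustration of what continuous control monitoring can look like under the hood, the sketch below scores access-log entries against a simple baseline and flags after-hours activity or unusual download volumes for human review. The log format, business-hours window, and thresholds are assumptions made for the example; production monitoring tools are considerably more sophisticated.

```python
from datetime import datetime
from statistics import mean, stdev

# Each log entry is assumed to look like:
# {"user": "jdoe", "timestamp": "2025-03-14T02:17:00", "mb_downloaded": 840}
BUSINESS_HOURS = range(7, 19)  # 7:00-18:59 local time; an assumption for this example

def flag_suspicious_events(logs: list[dict], spike_threshold: float = 3.0) -> list[dict]:
    """Flag after-hours activity and download volumes far above a user's own baseline."""
    by_user: dict[str, list[float]] = {}
    for entry in logs:
        by_user.setdefault(entry["user"], []).append(entry["mb_downloaded"])

    flagged = []
    for entry in logs:
        reasons = []
        hour = datetime.fromisoformat(entry["timestamp"]).hour
        if hour not in BUSINESS_HOURS:
            reasons.append("after-hours access")

        history = by_user[entry["user"]]
        if len(history) >= 5 and stdev(history) > 0:
            zscore = (entry["mb_downloaded"] - mean(history)) / stdev(history)
            if zscore > spike_threshold:
                reasons.append(f"download spike (z={zscore:.1f})")

        if reasons:
            # Flagged events are routed to a human reviewer, not acted on automatically.
            flagged.append({**entry, "reasons": reasons})
    return flagged
```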
- AI in internal controls and auditing
Gathering evidence to show that controls are effective is often slow and repetitive. Teams may need to locate files in multiple systems and verify them by hand. AI tools can take over much of this routine, sorting records, tagging key items, and matching them to audit requirements with greater speed and fewer mistakes.
- AI in regulatory compliance mapping
Managing compliance across several frameworks can get complicated fast. A single policy may need to align with multiple standards, including SOC 2, HIPAA, and ISO 27001, among others. Instead of starting from scratch with each one, some organizations use AI to compare requirements and spot where existing controls already meet multiple standards, or where something still needs to be added or revised.
As Spieler explains, “A single security control, like requiring multi-factor authentication, might show up across multiple frameworks, but each one phrases it differently or expects a different level of detail. For a team of humans, it could take a significant amount of time and expertise to read through all those frameworks, compare the requirements, and determine how they intersect. AI can act as a multi-framework expert, handling that complexity in real time and showing exactly what the company needs to change to comply with each framework.”
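A small sketch helps illustrate the mapping problem Spieler describes: one internal control expressed once, linked to requirements in several frameworks, with a routine that reports where coverage already exists and where gaps remain. The requirement identifiers and coverage values below are illustrative placeholders, not an authoritative mapping.

```python
# One internal control, expressed once, mapped to requirements in several frameworks.
# The requirement IDs below are illustrative placeholders, not authoritative citations.
mfa_control = {
    "id": "AC-01",
    "description": "Multi-factor authentication is required for remote and administrative access.",
    "mappings": {
        "SOC 2":     {"requirement": "CC6.1", "satisfied": True},
        "ISO 27001": {"requirement": "A.8.5", "satisfied": True},
        "HIPAA":     {"requirement": "164.312(d)", "satisfied": False},  # evidence not yet collected
    },
}

def coverage_report(control: dict) -> dict:
    """Split a control's framework mappings into covered frameworks and open gaps."""
    covered = [fw for fw, m in control["mappings"].items() if m["satisfied"]]
    gaps = [fw for fw, m in control["mappings"].items() if not m["satisfied"]]
    return {"control": control["id"], "covered": covered, "gaps": gaps}

print(coverage_report(mfa_control))
# {'control': 'AC-01', 'covered': ['SOC 2', 'ISO 27001'], 'gaps': ['HIPAA']}
```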
- AI in regulatory reporting
When reviewing vendor agreements or security questionnaires, teams must read closely to catch missing details or vague language that might pose a risk. With AI, many of these materials can be screened more quickly. If something seems off, like a missing clause in a SOC 2 report, the tool flags it so a human reviewer can dig in further. This makes it easier to vet third-party risk without falling behind on deadlines.
- AI in compliance audits
AI is increasingly used to assist with audit prep. Instead of pulling evidence manually or drafting summaries from scratch, teams can rely on tools that suggest control tests, match documentation to requirements, and highlight areas where something may not meet the mark. That allows auditors to spend less time on paperwork and more time on interpretation, deciding which issues matter most and how to fix them.

Technologies used in AI GRC
Some GRC platforms now incorporate advanced tools that help compliance teams keep pace with changing regulations and rising workloads. These systems rely on a mix of approaches—language models, graph structures, and task-specific code—to support different stages of the compliance cycle.
As Ferrell explains, these tools work best when they’re applied thoughtfully.
“Our system uses different AI techniques, including models that understand data and algorithms designed for safety and compliance tasks,” he says. “By combining these tools, we can automate much of the validation process, helping customers get faster and more accurate results that meet strict industry standards.”
The following technologies play a key role in how modern GRC software operates. Together, they help teams manage policies, test controls, review risks, and prepare for audits more efficiently.
- Natural language processing (NLP)
This branch of AI allows systems to read and make sense of written material—laws, contracts, audit trails, and other documentation that compliance teams work with daily.
- Machine learning (ML)
ML uses prior data—like past audits or testing logs—to identify patterns. Based on that, it can suggest improvements, highlight red flags, or assign risk levels.
- Large language models (LLMs)
Tools like GPT-4 can handle freeform writing. They help teams summarize meeting notes, draft risk statements, or prepare policy updates.
- Task-specific algorithms
These are designed to solve focused problems, such as determining how a single control maps across several frameworks, building a test plan, or verifying whether evidence aligns with requirements.
Ferrell from Valkit.ai notes how tailoring these tools to the job makes a difference.
“We built our AI models for specific compliance tasks, like reviewing controls or validating evidence, and they integrate relevant contextual data, which helps speed up every step,” he says. “This modular approach ensures customers get real value from AI, rather than using it as a generic tool that doesn’t fit their needs. And now that enterprise teams are more comfortable with AI, the conversation has shifted from if they’ll use it to how they can get the most out of it from the start.”
- Graph databases and knowledge graphs
These data models help map out how rules, risks, and requirements connect. Instead of reviewing each control in isolation, the system can look at how everything fits together.
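As a rough sketch of that idea, the example below uses the open-source networkx library to connect controls to the framework requirements they support, then queries the graph for coverage and gaps. The nodes and edges are invented for illustration; a production knowledge graph would be far larger and richer.

```python
import networkx as nx

# Bipartite graph: control nodes on one side, framework requirement nodes on the other.
g = nx.Graph()
g.add_edge("control:MFA", "SOC 2:CC6.1")
g.add_edge("control:MFA", "ISO 27001:A.8.5")
g.add_edge("control:Access reviews", "SOC 2:CC6.2")
g.add_node("ISO 27001:A.5.15")  # a requirement with no supporting control yet

def requirements_covered_by(control: str) -> list[str]:
    """All framework requirements connected to a given control."""
    return sorted(g.neighbors(control))

def uncovered_requirements() -> list[str]:
    """Requirements that no control currently maps to."""
    return sorted(n for n in g.nodes if not n.startswith("control:") and g.degree(n) == 0)

print(requirements_covered_by("control:MFA"))  # ['ISO 27001:A.8.5', 'SOC 2:CC6.1']
print(uncovered_requirements())                # ['ISO 27001:A.5.15']
```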
- Generative AI for drafting
When writing policies, test steps, or audit summaries, teams now use text-generation models to produce first drafts. Review is still essential, but it saves time at the start.
Benefits of AI in GRC tools
GRC teams are under pressure to move faster, do more, and manage growing complexity. Some organizations are now using AI-based tools to meet that demand, cutting down on manual work, uncovering risks sooner, and improving the way decisions are made across the business.
Below are several areas where teams are seeing clear gains:
- Governance
‣ More informed decisions
When executives need to make high-stakes calls, AI can help by pulling together relevant data, spotting patterns, and presenting key details. While humans still make the judgment calls, the tools help cut through the noise and highlight what matters.
‣ Time savings and better use of resources
AI has been used to handle tasks like pulling reports, checking compliance deadlines, and reviewing controls, work that usually takes hours of staff time. In Deloitte’s “State of Generative AI in the Enterprise,” a global survey of more than 2,700 senior leaders, 56% said the main reason they adopted AI was to improve productivity and efficiency.
- Risk Management and Compliance
‣ Earlier warnings
Some tools scan behavior logs, user activity, or transactions to flag things that don’t follow expected patterns—giving teams an earlier chance to respond before a problem grows.
‣ Less manual work on controls and documentation
Reviewing controls, scanning contracts for required terms, or checking whether certain protections are in place used to require hands-on review. AI can handle much of that groundwork, letting teams focus on the tougher judgment calls.
‣ Clearer insights from messy data
In large organizations, useful signals often get lost in long emails, PDFs, or system logs. AI can pull meaning out of that mess—connecting issues like sales dips to operational hiccups, or showing where different risks tend to overlap.
‣ Writing the first draft
Drafting summaries, control evaluations, or risk statements takes time. Some teams now use AI to create first-pass drafts that compliance staff can then review and adjust.
‣ Staying ahead of changes
Rather than checking in on systems once a quarter, some organizations now use tools that flag shifts as they happen. This keeps compliance from falling behind and helps teams respond more quickly when a policy, law, or internal condition changes.
‣ Getting through updates faster
When a new law or framework goes into effect, AI can speed up the work of matching existing controls to new rules, creating new test cases, or updating policy language.
‣ Doing more with what you have
Instead of hiring more staff to keep up, teams are using automation to get more done. That includes document review, policy updates, compliance checks, and risk scoring—freeing up time for strategy and higher-value work. See our related article on why AI-powered GRC is important to your business growth.
Challenges of AI in GRC tools
While AI offers powerful advantages in GRC, it also comes with growing pains. Some risks are technical, like unclear decision-making logic. Others are cultural, practical, or ethical.
Below is a look at where teams often run into friction:
- Lack of transparency in outputs
One of the more persistent concerns is the difficulty in explaining how certain AI systems reach their conclusions. This becomes especially tricky in compliance, where individuals must justify their decisions to auditors, regulators, or internal leadership.
Bartot explains: “We don’t really know how these large neural networks work. Research into that is just getting started. Until we can open up that black box, regulation is going to be difficult. It’s hard to control what you don’t understand.”
In high-stakes settings, that lack of clarity makes it hard to fully trust AI’s recommendations, let alone rely on them without human review.
- Limited depth in reasoning
Even when outputs sound plausible, AI may overlook context or suggest solutions that don’t hold up under scrutiny. Stanford’s 2024 AI Index report notes that current systems still fall short on complex reasoning tasks, especially in areas that require domain-specific judgment. Since GRC often demands just that, teams need to double-check outputs and rely on their own expertise to avoid missteps.
- Bias in the underlying data
If an AI system is trained on flawed or skewed information, it may repeat those flaws, producing unfair or incomplete results. That’s not so different from what happens when people rely on outdated references. The key difference: AI can scale those errors fast. That’s why teams need to vet training data and remain vigilant for unintentional bias.
- New types of compliance risk
AI doesn’t just help with compliance; it creates its own oversight challenges. As Bartot points out, “There’s a whole ecosystem of guardrail tools popping up now — tools that limit what a model can say or restrict certain types of user queries. I wouldn’t be surprised if we see GRC products pop up down the line that audit those guardrails, just like any other security or compliance measure.”
That means organizations will need to track the AI itself, not just what it’s used for.
- Teamwide skepticism or hesitation
Adopting AI isn’t just a technical shift. It’s cultural. Some employees lean in immediately, while others hold back or mistrust the tools.
“One of the biggest challenges in GRC is getting over the skepticism of AI,” says Spieler. “Some people are fully embracing it, automating tasks and moving fast, while others are still stuck, struggling with routine tasks. To fully leverage AI, we need to have all the teams on board to overcome cultural resistance and convince people to work alongside AI.”
- Difficulty integrating with legacy tools
Older systems aren’t always built to work with modern AI. Trying to plug new tech into custom workflows — or into platforms that haven’t been updated in years — can require time, expertise, and a fair amount of troubleshooting.
- Trouble keeping track of version changes
AI models evolve frequently. That creates challenges for teams trying to audit behavior or reproduce a result. If you don’t know which version of a model produced a particular output, it’s hard to assess responsibility. Some companies now use model registries and version tracking systems to stay on top of that history.
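Whatever tracking system a team chooses, the underlying practice is simple: record which model and version produced each output so the result can be traced later. Here is a minimal sketch of that kind of audit-log entry; the field names and hashing approach are illustrative assumptions, not a reference to any particular product.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_output(model_name: str, model_version: str, prompt: str, output: str) -> dict:
    """Build an audit-log record tying an AI output to the model version that produced it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    # In practice this record would be appended to tamper-evident storage the audit team controls.
    print(json.dumps(record))
    return record
```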
- Jumping in without a clear goal
In some companies, the push to “use AI” comes before anyone’s figured out what it’s supposed to solve. That approach often backfires, adding more steps or complexity without improving results.
Ferrell puts it clearly: “The key is to deploy AI thoughtfully: every integration should enable our customers to deliver real, measurable value, rather than being AI for AI’s sake.”
AI regulation
In 2025, AI regulation is a patchwork of national laws, voluntary standards, and international guidelines. The EU has passed a binding law, but most countries, including the U.S., have not. Many organizations follow frameworks such as NIST and ISO to help manage AI risk, ethics, and oversight.
The pace of change remains a significant challenge. Most companies now track both formal regulations (“hard law”) and voluntary frameworks (“soft law”), which may not be legally binding but often shape industry expectations. These include standards used in audits, vendor assessments, and procurement decisions.
As Bartot puts it: “There are people on both sides of the political spectrum issuing dire warnings about AI, but there’s no unified regulatory framework. Under the Biden administration, there was some momentum towards safety, guidelines, and standards. Now, it’s the Wild West.”
He adds: “Meanwhile, AI is being used everywhere, oftentimes surreptitiously by individuals who recognize its great (but still imperfect) utility. It’s a bottom-up adoption, not a top-down rollout where businesses and institutions set rules before the technology reaches the public.”
Because the legal landscape is still shifting, many GRC teams are choosing to align with the most established and demanding standards now, so they’re ready when voluntary guidelines become enforceable.
Here’s a summary of the major regulations and industry standards affecting AI GRC:
- United States (Federal)
- AI-specific laws:
- As of May 2025, the United States has not enacted a comprehensive federal law regulating private-sector AI development or deployment.
- In January 2025, the Trump administration signed an executive order rescinding a Biden-era directive that had instructed federal agencies to implement safeguards for AI use.
- Existing data and cybersecurity laws
- Other regulatory frameworks, like NIST 800-53, NIST 800-171, and the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC), aren’t AI-specific, but teams often apply them to AI systems because they cover data privacy, cybersecurity, and risk management.
- NIST AI Risk Management Framework (AI RMF)
- The National Institute of Standards and Technology (NIST) is a federal agency that develops technology and cybersecurity standards. Many NIST standards go on to become de facto international guidelines.
- Released in 2023, the NIST AI RMF provides voluntary guidance across four pillars: “Govern, Map, Measure, and Manage.” While it’s technically a voluntary framework, it has become a de facto standard for organizations building AI systems, especially in regulated industries or those involved in federal contracting.
- European Union
- The Artificial Intelligence Act (2024)
- In 2024, the EU adopted the “Artificial Intelligence Act.” The EU AI Act is the world’s first comprehensive, legally binding law regulating the use of AI.
- The Act bans certain high-risk applications, such as manipulative AI systems and public facial recognition, and imposes strict controls on others. For example, companies offering AI products in the EU must register their models, implement robust data governance, and provide transparency to users.
- The EU is phasing in compliance over a two-year period and expects organizations to be fully compliant around 2026.
- Canada
- Artificial Intelligence and Data Act (proposed)
- Canada drafted the “Artificial Intelligence and Data Act,” but the legislation had not passed Parliament as of early 2025.
- China
- Interim Measures for Generative AI (2023)
- In 2023, China issued the “Interim Measures for the Management of Generative AI Services.” Companies providing generative AI services in China must conduct regular security assessments, register their algorithms, and clearly tell users when content is AI-generated.
- International and Voluntary Frameworks
- OECD AI Principles (2019)
- The Organization for Economic Cooperation and Development (OECD) is a forum comprising 38 member countries that collaborate to develop policies that promote sustainable global economic growth.
- In 2019, the OECD adopted AI principles that major economies, like the US, the EU, and the UK, endorsed. The OECD AI principles were the first intergovernmental standard on AI, and influenced many subsequent standards and guidelines. The standards promote fairness, transparency, accountability, and human-centric AI use.
- NIST AI Risk Management Framework (RMF)
- See U.S. section above. Many global organizations adopt the NIST framework to create unified AI governance practices across jurisdictions.
- ISO/IEC AI Standards
- The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are global standard-setting bodies that create technical and management standards across industries. They often create joint management frameworks for technology and management systems.
- In late 2023, the ISO and IEC released “ISO/IEC 42001:2023.”
This framework is modeled after ISO 9001 (quality management) and ISO 27001 (information security). It outlines how organizations should structure policies, assign responsibilities, monitor risks, and ensure lifecycle accountability for AI systems. Organizations can certify against this standard to demonstrate responsible AI oversight.
- ISO/IEC 23894:2023 offers practical guidance for identifying, analyzing, and mitigating AI-specific risks. It builds on ISO 31000 (general risk management) and tailors it to AI’s unique challenges, such as algorithmic bias, lack of transparency, and data drift.
- Though voluntary, ISO standards carry weight in procurement processes, third-party audits, and regulatory alignment. Many multinational organizations treat ISO conformance as evidence of due diligence, and some regulators refer to these standards as best practices in evaluating AI risk governance.

Ethical considerations and frameworks for AI in GRC
When organizations use AI in governance, risk, and compliance work, ethics can’t be an afterthought. Whether the tools are making policy suggestions, scanning contracts, or identifying risk signals, teams need to know the tools align with human values — and that the people using them can explain the results.
A major concern, especially in the realm of compliance, is that many AI systems fail to disclose how they arrived at a particular conclusion. That lack of clarity makes it hard to review the logic behind a decision, and even harder to defend that decision when challenged.
Mistakes in this space aren’t just technical. A missed flag or a poorly explained outcome can lead to a breach of privacy, a failed audit, or a regulatory violation.
To help navigate these risks, several international organizations have developed ethical guidance:
- OECD AI Principles (2019)
These principles, created by the Organisation for Economic Co-operation and Development, encourage fairness, transparency, and respect for human rights in the design and use of AI. More than 40 countries have formally endorsed them.
- UNESCO AI Ethics Recommendation (2021)
UNESCO’s guidance expands on these ideas and adds others, including sustainability and cultural sensitivity. The goal is to promote AI that benefits society as a whole, not just individual organizations.
- General Data Protection Regulation (GDPR)
While primarily a data privacy law, GDPR also applies to AI systems that use or process personal data. It affects how companies structure consent, storage, and access, and in some cases, whether automated decisions are even allowed.
These frameworks lay out the “why” behind ethical AI. For the “how,” technical standards like NIST’s AI Risk Management Framework and ISO/IEC 42001 are starting to take hold:
- NIST AI RMF offers a practical structure for managing AI systems responsibly across their lifecycle.
- ISO/IEC 42001 is a certifiable standard that gives companies a way to demonstrate that their AI governance is structured, repeatable, and aligned with best practices.
The point isn’t to follow every guideline blindly — it’s to think carefully about how AI is used and make sure the risks are understood and managed.
In a recent episode of Secure Talk, Strike Graph CEO Justin Beals joined AI privacy expert Dan Clarke to talk through these challenges. The conversation focused on how companies can adopt AI in ways that keep privacy and accountability front and center, without giving up speed or innovation in the process.
Future of AI for GRC
In the future, AI will transform GRC by managing workflows, analyzing data, forecasting risks, and simulating decisions. Virtual auditors and risk advisors will work alongside human teams to deliver faster, smarter, and more proactive insights and decision support.
Here’s what experts expect the future of AI in GRC to look like:
- AI-driven compliance automation
AI models are increasingly capable of assisting with, or even leading, parts of the audit process. They can handle compliance monitoring and reporting, verify evidence, and streamline the entire audit workflow.
“There’s no reason an AI model couldn’t assist with, or even lead, an audit,” says Bartot. “It could review documents, verify evidence, and streamline the entire process. It is arguably already as sophisticated as an average auditor.”
- Real-time risk intelligence
The future of GRC involves AI systems that continuously ingest and analyze vast data streams to surface potential issues before they arise. Ferrell says, “I envision real-time intelligence that continuously ingests and analyzes vast data streams, surfacing potential issues before they ever arise.”
- AI as a strategic advisor
AI may play a significant role in strategic decision-making by providing executive-level insights. By capturing and analyzing a company's history, AI can summarize lessons learned, identify trends, and assist leaders in making data-informed decisions. This support has the potential to change both everyday roles and how executives make decisions.
- Specialized roles
“I see a future where AI tools are built for specific roles—like AI auditors, AI agents, AI compliance reviewers, or AI content generators,” explains Ferrell. “Humans will guide the system: training the models, setting the rules, and reviewing the outputs. AI will handle the heavy data work—spotting anomalies, surfacing risks, and running analyses—while humans focus on strategy, governance, and nuanced judgment calls.”
Will GRC be replaced by AI?
It’s unlikely. AI may change how GRC teams work, especially when it comes to handling large volumes of information, but it won’t replace the people behind the programs. Tasks like monitoring, testing, and documentation may shift to machines, but decisions still need human context.
What AI offers is faster input. What it can’t offer is judgment.
Will AI replace compliance?
Unlikely. AI tools can take over routine steps — scanning policies, testing controls, or pulling risk data — but someone still has to interpret the results. Compliance isn’t just about checking boxes. It’s about knowing when something matters and why.
Spieler says that while a future with AI auditors is very likely, AI tools won’t function as autonomous systems.
“I don’t see a future where we completely remove humans from the compliance loop,” he says. “AI can’t see the full picture of a business. It might notice a missing control and flag it, but without understanding the broader business context — like specific regulatory pressures or local issues — its impact is limited.”
That view highlights why most teams will continue using a human-in-the-loop model. AI might spot patterns or surface red flags. But it still takes a person to weigh what those signals mean, apply them to the company’s environment, and decide what happens next.

How Strike Graph uses AI in GRC
Strike Graph has built AI into the core of its GRC platform, not as an add-on, but as part of how teams carry out compliance tasks from day to day. The goal is to help organizations reduce manual review, identify what’s missing faster, and move through audits with more confidence.
Here are some ways Strike Graph’s AI tools are being used:
- AI in internal audits
Verify AI, the company’s internal audit assistant, works with frameworks like SOC 2 and ISO 27001. It checks controls and their supporting documentation, highlights missing pieces, and can suggest revisions or generate new control language where needed. Strike Graph also offers a companion tool, Security Assistant, which reviews findings and helps teams close gaps right away.
- Control monitoring
As teams prepare for audits or update their compliance posture, Verify AI reviews control requirements and suggests specific types of evidence that would meet them. For example, it might prompt a team to add an access review report for ISO 27001 or include a training log to meet SOC 2 expectations.
- Evidence management
Instead of asking teams to track supporting documentation manually, the system helps match each control to appropriate proof. If a piece of evidence is missing or mismatched, Verify AI flags it and recommends what to include—reducing the risk of gaps that could cause delays or follow-up questions during an audit.
- Managing across multiple frameworks
Strike Graph’s platform uses a graphical structure to show how different frameworks intersect. It maps how controls relate to each other across SOC 2, ISO 27001, and others, so teams can spot overlaps, resolve conflicts, and align requirements without having to manage everything from scratch. That kind of visual guidance makes complex compliance work more manageable.
Examples of AI-powered GRC Tools
How Strike Graph’s AI-powered GRC tools improve efficiency and facilitate audit readiness
Strike Graph’s AI-powered GRC platform helps businesses manage compliance tasks faster and with less effort, freeing teams to focus on growth and strategy.
Its tools, Verify AI and Security Assistant, scan frameworks, flag gaps, suggest fixes, and guide teams through complex work. Soon, Strike Graph’s AI will serve as a full compliance manager, running enterprise compliance programs end-to-end.
Strike Graph took an AI-first approach from the start, building AI into the core of its system from the ground up. While competitors bolt AI onto legacy systems, Strike Graph is purpose-built for enterprise-grade compliance challenges.
Think of Strike Graph’s Verify AI as your trusted internal auditor. It knows the details of every framework, from PCI DSS to CMMC and beyond. It uses that knowledge to identify gaps and produce fact-based reports. Security Assistant, the second branch of Strike Graph’s AI tools, works alongside Verify AI as the consultant. It turns Verify AI’s insights into clear, actionable steps that help teams stay audit-ready and fix issues quickly.
Today, Strike Graph’s AI tools help simplify audits and guide GRC teams through complex compliance tasks. Soon, the AI will power a system that runs quietly in the background, offering continuous monitoring to flag risks before they escalate.
Strike Graph is leading the charge on integrating AI into GRC systems. Looking ahead, Strike Graph’s AI-powered platform will become a full-time, dedicated internal auditor, managing enterprise compliance programs end-to-end so teams can focus on strategy, vision, and growth.