Special Episode: The Secure Talk Security Awareness Training 2025 (With HIPAA!)

June 24, 2025

SecureTalk 2025 Security Awareness Training | Complete Compliance Guide

Welcome to SecureTalk's comprehensive 2025 Security Awareness Training video! This annually updated training is designed to help organizations meet their security compliance requirements while building a strong security culture.

🎯 What You'll Learn:

Social Engineering & AI-Enhanced Threats

  • Advanced phishing detection in the AI era
  • Voice and video deepfake attack recognition
  • Financial verification protocols to prevent fraud
  • Healthcare data protection against social engineering

Cloud Security & Infrastructure

  • Common cloud misconfigurations and prevention
  • Secure AI model development and deployment
  • Financial data protection in cloud environments
  • Package dependency management and vulnerability scanning

Supply Chain & Third-Party Risk

  • Vendor security assessment frameworks
  • Zero Trust architecture implementation
  • HIPAA compliance for business associates
  • AI vendor risk evaluation checklists

Insider Threats & Hybrid Work Security

  • Behavioral analytics for threat detection
  • Environment-adaptive security controls
  • Data loss prevention in remote work settings
  • Segregation of duties in digital workflows

Regulatory Compliance & Automation

  • 2025-2026 regulatory calendar overview
  • Control-centric compliance approach
  • Continuous monitoring and automation strategies
  • Multi-framework compliance alignment

Building Security Culture

  • Security mindset vs. rule-following approach
  • Positive reinforcement security programs
  • Organizational security maturity models
  • Leadership's role in security culture

💼 Compliance Frameworks This Training Addresses:

  • SOC 2 Type I & II
  • ISO 27001
  • HIPAA & Healthcare Security
  • PCI DSS
  • CMMC (Cybersecurity Maturity Model Certification)
  • GDPR & EU AI Act
  • AI Accountability Act
  • NIST Cybersecurity Framework
  • State privacy laws (CCPA, CPRA, etc.)

🏆 Perfect For:

  • Annual security awareness training requirements
  • Compliance audit preparation
  • New employee onboarding
  • Security culture development
  • Multi-framework compliance programs

🎓 Certification Available: Complete the training and receive a certificate of completion for your compliance documentation. Link provided at the end of the video.

📺 About SecureTalk: SecureTalk explores critical information security innovation and compliance topics. Hosted by Justin Beals, founder and CEO of StrikeGraph, featuring expert insights from cybersecurity professionals across finance, healthcare, engineering, and compliance.

🔔 Subscribe for more security insights and compliance guidance!


Chapters:

  • 0:00 Introduction & Training Overview
  • 3:18 Social Engineering with Stephen (IT Compliance Expert)
  • 15:00 Advanced Threats with Kenneth (CISA, CISSP)
  • 30:30 Cloud Security with Josh (Head of Engineering)
  • 44:55 Insider Threats with Elmi (Assessments Manager)
  • 49:09 Regulatory Compliance with Micah (Chief Product Officer)
  • 1:01:42 Security Culture with Juliet (CFO)

#CybersecurityTraining #SecurityAwareness #ComplianceTraining #SOC2 #HIPAA #ISO27001 #SecurityCulture #StrikeGraph #SecureTalk

Full Transcript



Justin Beals: 

Hello everyone and welcome to SecureTalk. I'm your host, Justin Beals. For those of you who have come to this video for your security awareness training, usually in service of some security compliance outcome, stay to the end of the video. There will be a link you can use to receive a certificate of completion, which you can submit as evidence toward your compliance outcome. And for our normal listeners who are a little curious about what we are up to this particular week:

 

We've gotten a lot of requests from folks that are listening to the channel and the work that we do, as well as the broader community, to produce kind of an annual security awareness training video, something that can be consumed by teams that are focused on general areas of cybersecurity or security broadly for the organization. 

 

Usually, we are doing this in service of some form of compliance outcome. Almost every security or data privacy framework or law I've ever read includes some requirement to train your team on its effective implementation. So this episode is designed for that audience to come and receive a little bit of security awareness training. For the professionals in the security space who also need to develop security awareness training videos, you and I would probably agree that the most effective security awareness training is the one that's designed around your business. At least that's the way I've found it to be true, in that we cover topics that are germane to the risks of our operation, the types of customers we like to support, the data we store, and the best practices and culture we want to develop internally.

 

So I certainly recommend that particular path for those looking for security awareness training, but we also turned on remix capabilities on this video. So please feel free to use ourselves as an asset in developing yours. 

 

And finally, one of the things that I'm really excited about this particular week is that I get a chance to introduce some of my teammates at StrikeGraph. StrikeGraph goes through a number of different compliance outcomes that our customers expect of us. We have a heavy focus on effective security practices, of course.

 

And what we find is that security is practiced by our whole organization with different roles and responsibilities in different groups. The trick is tracking it well. And so in an effort to kind of highlight how we handle that broadly across the group, you're going to get a chance to meet my teammates, everyone from our CFO to folks that work on audits and assessments, and hear about their concerns for security awareness and making sure that we are a strong business.

 

Anyways, I'd love to welcome you all to our 2025 Security Awareness Training video. I hope you enjoy. Thanks.



Social Engineering, with Stephen Ferrell

 

Justin Beals:

Stephen, thanks for joining us today on SecureTalk to kick off our 2025 Security Awareness Training.

 

Stephen Ferrell: 

Thanks for having me, Justin. Pleasure to be here.

 

Justin Beals: 

Excellent. You know, I think this particular episode of SecureTalk came about in a little bit of a different way. It was our YouTube listeners and followers who were making requests for SecureTalk to produce a kind of annual security awareness training video. And I couldn't think of anyone better to help us introduce the topic broadly. Stephen, why don't you just tell us a little bit about what you do and your background?

 

Stephen Ferrell: 

 

Sure thing. So I've been in the IT compliance space for close to 25 years, the big bulk of that time in highly regulated industries, primarily life science. My career has taken me across the different compliance wavelengths in that space, from cybersecurity all the way through product compliance and testing systems that support the manufacturing of drugs. I've been very involved in industry: I had the opportunity to be a contributing author to the second edition of the Good Automated Manufacturing Practice guide, or GAMP as it's called. I led the effort on the IT Infrastructure Good Practice Guide, and similarly I was the chapter lead for the cybersecurity and risk chapters of the upcoming AI guide. So, very involved in industry. Security is a very important thing, near and dear to my heart, but of course important to anyone doing anything these days.

 

Justin Beals: 

Yeah. You know, you and I have worked together for quite a bit, and I think we share something in our perspective around security: that it's just kind of fundamental to operational excellence. In the past we've wanted to segment security from operations, but they are really the same thing across the business.

 

Stephen Ferrell

One hundred percent, they are. And this is interesting, because often you talk to people and you say, hey, look, here's ISO 27001, or here's SOC 2, and they'll have a read at it for the first time, and the response is often, well, that's just good practice. And they're right, maybe it is just good practice, but the reason these guidances and standards exist is because, as humans, we like to do our own practices instead. I think that if you follow the frameworks that are out there, the chances are that, if you're true to them, you'll achieve operational efficiencies and maintain security, right? They're not mutually exclusive.

I think when folks fall into the trap where their security policy becomes shelfware, something they roll out when an auditor shows up, the chances of them having a breach, the chances of them having financial loss or reputational loss, just increase. So it's really important to understand that operations and security go very much hand in hand, and done right, security can improve your operations.

 

Justin Beals: 

One of the things that I kind of rail against, and I think it's just a habit we have, is that we think a piece of cybersecurity technology is going to solve our security issue. But the problem there is that we have to operate anything we buy. And if we don't operate it in a secure manner, it won't have any effect on our overall security.

 

Stephen Ferrell:

Yeah, absolutely. It's the old adage, right? A fool with a tool is still a fool. And I think that's a big challenge, especially with security. It's like me putting a lock on my front door and then never locking my door. Yes, I have a lock, but I've never used it. So without an understanding of what the purpose of the tools are, what their limitations are, what their scope is, people can be drawn into a false sense of security, which is often worse, because you're sometimes more exploited than you would have been otherwise.

 

Justin Beals: 

And then certainly the danger is still dramatic. I don't think we've seen any decrease in the financial or reputational impact of major breaches, and we've seen some really dramatic breaches. You know, can you talk to us a little bit about how some of these major security failures have translated into true impacts for these businesses and their customers?

 

Stephen Ferrell: 

Yeah, I think there are layers to it, right? There's the immediate impact of a breach from the standpoint of, okay, have we lost customer data? So everybody now panics about their personal data. The next piece is, do we still have data integrity? The data that remains, even if we can still access it, is it still correct? Does it honor the ALCOA principles, right? Is it attributable, legible, contemporaneous, original, and accurate? The minute you have a breach, the minute a third party gets into your data, all of that is in question.

 

I'm not suggesting that it's out the window, but it's certainly in question. You have the immediate question: where is my data going? You have the challenge of whatever you have left. You have the lingering question of whether the attacker is still in your system somewhere, and whether there are still potential vectors for them to hit you again, or for someone else to do so. Really, it's like getting punched in the face by a heavyweight boxer: you're going to be reeling from all the peripheral things that come with the attack. Then, once you gather yourself up from that, there's the reality of what it means to your business from a market perspective; it's going to impact your stock price if you're public.

 

It may start to impact the bottom line as far as loss of trust from customers. There are just so many layers that make security awareness and security preparedness so absolutely critical to any business.

 

Justin Beals: 

And certainly I like where you're going, and where a lot of our industry has gone: it's not just about implementing good security, but understanding that breaches happen, security incidents happen, and we need the operational capacity to act in a resilient manner. That concept of resilience you point out is critical to operations and security.

 

Stephen Ferrell: 

Very much so, right? I think if we boil it down to some of the things that we deal with at Strike Graph, you've got the concept of having a process or a control, then that process or control being provable, and then having tools. You really need all of those elements; you can't have one without the others. You can't have a process that says, if there's a breach, this is what we do, but then never test that process and not have the capacity to execute it, right?

 

Then it's pointless. Similarly, having the tools without the controls is also pointless because, back to the door analogy, if you've got a door with a lock and no key, what's the point, right? You cannot take any element out of this and still have a true security posture you can feel good about.

 

Justin Beals:

You know, it sounds very dire and scary, but I think one of the things you and I know, and this is deeply proven in the market, is that, as this diagram shows, having good security leads to a lot of business success. And so there's an opportunity here as well: to be more successful at business by operating good security.

 

Stephen Ferrell: 

Yeah, very much so. And I like that this diagram has security sort of as the foundation as opposed to something that's, you know, on the side or tacked on, right? Because it really is, you know, every element of your business, every system that you use, every human interaction with data has an impact on security. So building a robust set of controls around that, you know, gives you the foundation where you can, you know, have comfort in your operations and then drive those hopefully towards business success. The reality is the best that we can ever hope for is to drive risk to as low as reasonably possible. 

 

We're never going to fully eliminate it, but driving it as low as reasonably possible allows us to at least go to bed at night feeling fairly confident, and if something does go wrong, we have a plan to quickly remediate it and recover.




Justin Beals: 

Yeah. And certainly something we've seen from a business success perspective is that a lot of our customers translate the security they operate into credentials in the broader marketplace. They drive trust, new revenue, and opportunities to expand into new markets. So I think that's interesting. Have you seen it as well over the last five years or so? Security has become a revenue driver.

 

Stephen Ferrell:

One hundred percent. I think if you're a SaaS company today, whether you're selling into life sciences or aviation or the DIB, or even if you're making video games, I'm not sure anybody would take you seriously anymore if you don't have at least one security certification. It really has become a cost of doing business, and a necessary cost, anytime you're gathering up information about somebody that could otherwise be exploited.

 

I think it's a reasonable expectation that you go ahead and, you know, prove to a third party that, yes indeed, you have your act together. So yeah, it's really become a default now, I think, for anybody in that kind of X-as-a-service space.

 

Justin Beals: 

Excellent, Stephen. Well, I think this has been a great kickoff to our security awareness training for 2025. We really appreciate you sharing your expertise and insights.

 

Stephen Ferrell: 

Thanks Justin, appreciate it.

 


Advanced Threats, with Kenneth Webb

 

Justin Beals:

Welcome, Kenneth, to our security awareness training. We're really grateful to have you join us. Tell us a little bit about what you do at Strike Graph.

 

Kenneth Webb: 

Hey Justin, thank you for having me today. I'm the head of cybersecurity compliance assessments at Strike Graph. We work together with our customers in areas like SOC 2, HIPAA, PCI, and penetration testing, all across information security. It's a very exciting thing, what we do.

 

Justin Beals:

Yeah, and I think one thing you've done in our time working together is complete your ISACA Certified Information Systems Auditor certification, which is really awesome. It helps you lead our team.

 

Kenneth Webb:

Yeah, and also the Certified Information Systems Security Professional, or the CISSP.




Justin Beals:

That's amazing. Yeah. Well, today you're going to help us learn a lot about social engineering. I know from some of my research work that social engineering plays a part in between 70 and 80 percent of breaches. It really is a big part of almost any attack. And let's talk about something really specific, which we have up here on the deck: this email phishing attempt.

 

This is pretty frightening. These are very similar emails. It's almost impossible to tell the difference.

 

Kenneth Webb: 

Yeah, and that's exactly the problem. Like, AI has dramatically increased the sophistication of the phishing attempts. So the fake email that you see on the right of the screen was generated by AI. And basically, AI learned about legitimate communications in your organization. So you can notice things like perfect grammar, appropriate context, and even writing styles that match your corporate culture.

 

However, those traditional red flags that we were used to seeing, like spelling errors or a little mistake here and there, are gone. In 2025, employees need to verify suspicious requests through secondary channels, even when the message looks perfectly legitimate, like this one.

 

Justin Beals: 

Yeah, so this is a big change in our operations: it's almost always verify. If you receive a request that has an impact, verify through a second channel whether or not it was actually requested. I mean, the attacks are getting more sophisticated. We're seeing voice- and video-based attacks even. How do these work?



Kenneth Webb: 

Well, the voice deepfake attacks are particularly dangerous because we tend to trust what we hear. So if you look at those waveform patterns on the screen, you will see that they are almost identical, right? The differences between the real and the fake are nearly imperceptible to the human ear.

 

We've seen cases where financial staff receive calls supposedly from their CEO requesting urgent wire transfers. The technology that creates these fakes is now widely available. So basically, it's the same thing as with phishing emails: countermeasures include implementing verification protocols, unique security questions, or callback procedures.

 

Regardless of how convincing the call you get is, it's better to fall on the overprotective side of the fence than to give your money away because you thought, yeah, that was my CEO, that was a real request.



Justin Beals: 

And, you know, in my research, almost all of these social engineering attacks are generally aimed at stealing something, data or money, of course. And if we look from a financial department's perspective, there's a very particular vulnerability with technologies they might have been used to in the past, like a phone call or something along those lines. What special precautions should they take?

 

Kenneth Webb: 

Well, financial teams should implement what we call trust-breaking points in their workflow. Basically, if you look at this diagram, notice how, even after the initial approval, we have this mandatory out-of-band verification step for transactions above a certain threshold. So this might be, as you said, a phone call to a pre-established number or a separate authentication app for confirmation.

 

So these breaking points prevent that continuous social engineering attack where the attacker tries to walk someone through an entire transaction process. So those little breakpoints in the process will help us mitigate the attack before it's finished.
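The trust-breaking point Kenneth describes can be sketched in a few lines. This is a minimal illustration, not Strike Graph's actual implementation; the dollar threshold and function names are assumptions chosen for the example.

```python
# Sketch of a "trust-breaking point": transactions above a threshold
# require out-of-band confirmation before they can proceed.
# The $10,000 threshold is an illustrative assumption.

OUT_OF_BAND_THRESHOLD = 10_000  # dollars; set per your risk appetite

def requires_out_of_band(amount: float) -> bool:
    """Large transactions must break out of the normal approval flow."""
    return amount >= OUT_OF_BAND_THRESHOLD

def approve_transfer(amount: float, initial_approval: bool,
                     out_of_band_confirmed: bool) -> bool:
    """A transfer proceeds only if the initial approval is present AND,
    above the threshold, a second, independent channel confirmed it
    (e.g. a callback to a pre-established number)."""
    if not initial_approval:
        return False
    if requires_out_of_band(amount):
        return out_of_band_confirmed
    return True
```

The point of the structure is that an attacker who talks someone through the first approval still cannot complete a large transfer, because the second channel sits outside the conversation they control.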

 

Justin Beals: 

It's interesting that one of our solutions to these social engineering problems is better behavior by us as human beings.

 

You know, certainly some of the most valued data on the black market is patient healthcare information; it gets bought and sold a lot. And a lot of data needs to be kept private with similar types of verification processes. Here's an example of healthcare verification processes. Can you tell us a little bit about how we're attempting to solve for social engineering here?

 

Kenneth Webb: 

Absolutely. As you said, patient data is extremely valuable, and social engineers know it, right? So in this flow chart, we show how healthcare staff should handle data access requests.

 

Notice the verification tripwires: confirming the requestor's identity through multiple factors, validating the clinical necessity of that information, and also logging those access patterns so they can be audited later. The key difference in healthcare is that urgency is often legitimate; here we are talking about human lives that can be in danger. So the protocols need to be secure and also efficient at the same time, so we don't compromise patient care.

 

Justin Beals: 

I think this also highlights a broader good practice from a security perspective, which is the concept of legitimate use, right? Do you need this data? Is there a good reason? Sometimes we feel like we're going to offend someone, or be offended, if we're asked why we need data, but that's not a bad thing to ask, is it?

 

Kenneth Webb:

Not at all. Basically, the need-to-know principle should apply now more than ever.

 

Justin Beals: 

And of course we're all very interconnected, not only within our organization but outside of it. I think one of the other topics you're going to help us with here is supply chain vulnerabilities. Our organization is connected with a lot of other computers, technology systems, and companies, and without them, honestly, we wouldn't be able to deliver on our solution. This looks almost impossibly complex.

 

You know, how are organizations supposed to manage this level of interdependence?

 

Kenneth Webb:

 

So the complexity is precisely the challenge. The average enterprise now has over 300 third-party vendors with some level of system access or data sharing. And each of those vendors has their own suppliers, so the chain extends, as we can see. A vulnerability anywhere in this web affects everyone. Organizations need to map these dependencies, categorize vendors by risk level, and implement continuous monitoring. You need to remember that your security is only as strong as your weakest vendor's security, so it's critical to have them mapped.
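The dependency mapping Kenneth mentions is, at its core, a graph traversal: your direct vendors have their own suppliers, and your exposure is the transitive closure. A small sketch, with entirely hypothetical vendor names:

```python
from collections import deque

# Hypothetical vendor dependency map: each vendor lists its own suppliers.
# Walking the map shows how far your exposure really extends beyond the
# vendors you contract with directly.
suppliers = {
    "acme-crm": ["cloudhost-a", "auth-provider-x"],
    "cloudhost-a": ["chip-fab-z"],
    "auth-provider-x": [],
    "chip-fab-z": [],
}

def transitive_suppliers(vendor: str) -> set[str]:
    """All direct and indirect suppliers reachable from one vendor."""
    seen, queue = set(), deque([vendor])
    while queue:
        for dep in suppliers.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Even in this toy map, a breach at "chip-fab-z" reaches "acme-crm" two hops away, which is why risk categorization can't stop at the vendors you sign contracts with.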

 

Justin Beals:

And we've seen some massive supply chain disruptions in the marketplace over the years. The interdependency is necessary to innovate rapidly, but it can also be a severe vulnerability. And certainly, AI tools are introducing new ways to think about supply chain considerations, especially utilizing AI itself.

You know, a lot of people are going brand new into new AI vendors. What should they be looking for from a risk perspective?

 

Kenneth Webb:

So the checklist that you see on the screen is crucial for evaluating AI vendors. Beyond traditional security measures, you need to assess how they secure their training data, whether they have controls against model poisoning attacks, and how they prevent prompt injection vulnerabilities. And even more important, verify that they are doing adversarial testing of their models. Some of these questions might seem technical, but non-technical procurement teams can still ask for documentation of these controls and have security specialists review it.

 

In the current AI landscape, we see application providers, model providers, orchestration service providers, and also cloud service providers, and all of them are intertwined. So this checklist will help you identify and track the possible points of failure you can have across your AI vendors.

 

Justin Beals: 

Yeah. Certainly third-party risk has been a big area of investment from a security perspective. But when I think about third-party risk, one of the first requirements I remember came from HIPAA; this law has been with us for a long time in the United States, and right off the bat it had strict regulatory requirements for vendors.

 

Tell me how things like this verification dashboard can change your approach to managing vendors if you're a healthcare organization.

 

Kenneth Webb: 

 

Sure. Well, this dashboard shows how healthcare organizations should track vendor compliance. Beyond general security, healthcare vendors must demonstrate HIPAA compliance, have clear BAAs, or business associate agreements, and show specific controls for protected health information, or PHI.

 

This dashboard also tracks incident response, as you can see in the middle, where you need vendors who can respond within regulatory timeframes if there is a breach. And notice also the real-time compliance monitoring, where a vendor's status can change between the different assessments they go through.

 

Justin Beals:

Wow. I know Strike Graph, where we both work, is HIPAA compliant as a business associate. And I find it really interesting that our customers who deliver patient healthcare, and who needed us to be near patient healthcare information, asked us to meet their same level of good security. I think that speaks to that web of security you mentioned before.

 

Kenneth Webb:

Exactly, because data is being transferred, accessed, or stored at different points along that chain. Those HIPAA customers who ask us to sign a BAA want to have the same level of confidence that our practices can take care of that PHI the same way they do. So that is the main requirement, if we can call it that, they will have for us: to keep that data protected if it gets into our systems, or even near them.

 

Justin Beals: 

Yeah. Well, Kenneth, I just returned from a big cybersecurity conference, RSA. And of course, the concept of Zero Trust is a big theme; it continues to dominate the security discussion. How does Zero Trust apply to supply chain security?







Kenneth Webb:

 

Well, it fits perfectly for supply chain security, because it starts with the assumption that no vendor should be inherently trusted, right? If you look at the diagram on the screen, you can see how every vendor access request goes through a continuous verification process, checking identity, device security, access privileges, and also behavioral patterns.

Even after the initial authorization, access is constantly reevaluated. The beauty of this Zero Trust architecture is that it works across all three sectors we are discussing: financial, healthcare, and AI operations. With just small adjustments for each industry's requirements, you can have a Zero Trust architecture in place, working and protecting you and your data.
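The "continuous verification" idea can be reduced to a small deny-by-default policy check that runs on every request, not once at login. This is a minimal sketch; the signal names and the strict "all signals must pass" policy are illustrative assumptions, not a reference Zero Trust implementation.

```python
# Minimal Zero Trust evaluation sketch: every request is re-decided
# against several signals; nothing is trusted permanently.
# Signal names here are illustrative assumptions.

REQUIRED_SIGNALS = ("identity_verified", "device_compliant",
                    "privilege_matches", "behavior_normal")

def evaluate_request(signals: dict) -> bool:
    """Deny by default: a request is allowed only when every required
    signal is present and true for this specific request."""
    return all(signals.get(name, False) for name in REQUIRED_SIGNALS)
```

Because the check runs per request, an anomalous behavior signal can revoke access mid-session, which is the property Kenneth is describing.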

 

Justin Beals: 

I always think that zero trust is a little bit more of a philosophy than a set of rules in that it should inform the rules you write. Is that true for you, Kenneth?

 

Kenneth Webb:

Yeah, we can see it as a philosophy. We also need to take into consideration the fact that in cybersecurity, you need to balance your security measures with the usability of your products. If we have a 100% Zero Trust architecture, it can become an overwhelming process for the user, because they will need to re-authenticate every time, here and there; you will need to enter your credentials often, so it can be a little bit of a pain to use that system. So, as you put it, it is a good philosophy that you need to leverage in order to decide how deep you want that Zero Trust to go into your solutions.

 

Justin Beals:

Wonderful, Kenneth. Thank you for covering the social engineering security awareness, as well as the supply chain security awareness work that we have to do.

 

Kenneth Webb:

Of course. Anytime.

 

Cloud Security with Josh Bullers

 

Justin Beals:

Josh, thanks for joining us today on SecureTalk to talk about cloud security gaps. Before we dive in, can you tell us a little bit about what you do?

 

Josh Bullers:

Yeah, sure. Thanks, Justin. My name is Josh Bullers. I'm the head of engineering and AI here at StrikeGraph. A lot of my focus is on the AI side of things, but I also have to balance that with our engineering stack, working closely with our infrastructure team and the other engineers to make sure we're not missing anything as we productize these things.

 

Justin Beals:

Yeah. You and I have worked together for a while, and in that entire time, almost every piece of software we've deployed has gone to the cloud. Certainly we're both aware, whether it's an on-prem deployment or a cloud deployment, how easy it is to misconfigure some of these systems; we have a lot of opportunity to change how we want the architecture to work or what's unique about our innovation. We have up here a slide that shows the types of cloud misconfigurations that are possible. How common are these particular issues?

 

Josh Bullers:

I mean, cloud misconfigurations are really common, actually. It's really easy to have a certain piece drift out of where you expected it. And I think studies are showing that over 80% of organizations have at least one misconfiguration at any time. So the doors we're looking at here represent just a few of the areas where we might see misconfigurations. There are things like excessive IAM permissions, which are really common and easy to introduce: as you roll out new services and build out your product, you end up in a situation where you might over-grant in a certain area, or give too many admin permissions to something or someone when you should be locking those things down. You might not be keeping up with your vulnerabilities; it's really easy to get pinned on a certain package version somewhere. Or you might leave something like S3 a little more open than anticipated, with things not encrypted there, or other issues along those lines.

 

So it's really easy to drift away from that perfect environment you might start with. Every new thing you add is a potential point where you leave something exposed.
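The categories Josh lists (public storage, missing encryption, wildcard IAM grants, stale packages) are exactly the kind of thing drift-detection tools scan for. A toy sketch of such a check, over a hypothetical configuration snapshot rather than any real provider's API:

```python
# Hypothetical configuration snapshot for one cloud service. The keys
# and checks mirror the common misconfigurations discussed above; they
# are illustrative, not any provider's real schema.

def find_misconfigurations(cfg: dict) -> list[str]:
    """Return a human-readable finding for each risky setting."""
    findings = []
    if cfg.get("bucket_public", False):
        findings.append("storage bucket is publicly readable")
    if not cfg.get("encryption_at_rest", True):
        findings.append("data at rest is not encrypted")
    if "*" in cfg.get("iam_actions", []):
        findings.append("IAM policy grants wildcard permissions")
    if cfg.get("packages_outdated", 0) > 0:
        findings.append(f"{cfg['packages_outdated']} packages need patching")
    return findings
```

Running a check like this continuously, rather than at audit time, is what catches the drift Josh describes before it becomes an incident.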

 

Justin Beals: 

Yeah, I think of three things that are different from an on-prem deployment. One is that, certainly, the reason we like the cloud is that it's very fast to provision new hardware. But that also means we can build a lot of different things very quickly without necessarily having to be very thoughtful about it.

 

The second is that everyone has some level of access to it. And we didn't normally do that in the data center. It was pretty limited. And then finally, it's always changing. I think that's why these are really complex systems. And they're very dangerous because, of course, we store our software and all of our customer data in these large hyperscalers.

 

Josh Bullers: 

For sure. I do think in that scenario you have to be a little bit trusting of your vendor as well, and particular in how you choose your vendor. That's why some of the big players exist: they have the staff to close those loopholes as best as possible on their side. But yeah, definitely something to keep in mind.




Justin Beals:

Yeah, thanks Josh. That's really helpful. Next we wanted to talk a little bit about financial data. It does seem to require a lot of extra attention in the cloud, and we know a lot of financial systems have been moving to cloud environments. Tell us what's so precious about this financial data.

 

Josh Bullers:

Yeah, I mean, there's naturally a little bit more security and forethought that needs to come into that situation. It's one thing to accidentally leak somebody's username or something else that's not very sensitive. While we do deem that sensitive, it's not the same level as somebody's credit card information or Social Security number or those other bits and pieces. So if you're in an environment where you are working with that kind of data, you really have to approach it with thought, like this matrix, for example, that's showing the roles within an organization and the different access to systems they might have.

 

Things might be time-bound. They might be full access or read-only. It really needs to be thought through in terms of what the role needs to actually achieve their job duties, not necessarily what they might want access to. Looking at the CFO, for example, you might think, just give them access to everything, but they don't actually need access to everything to do their job. They have people on their teams who can compile that information and provide it to them when they need it, instead of having them lurking in every system, poking at everything, and maybe forgetting to shut something off or copying it to their local machine.

 

And I think a classic one is the time-bound stuff, like with auditors or other situations where they might come in and need read-only access, but you want to tie that to a window in time and then shut it off afterwards. So it's not just a door left open, as we were discussing before.
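The kind of role-by-system matrix described here can be expressed directly in code. This is a minimal sketch, with made-up roles and systems, showing how a grant can carry both an access level and an expiry so an auditor's window closes on its own:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical access matrix: role -> system -> (level, optional expiry).
# The auditor's read-only grant is tied to a window and expires on its own.
ACCESS_MATRIX = {
    "ap_clerk": {"billing": ("read_write", None)},
    "cfo":      {"reports": ("read_only", None)},
    "auditor":  {"billing": ("read_only", now + timedelta(days=14))},
}

def check_access(role, system, when=None):
    when = when or datetime.now(timezone.utc)
    grant = ACCESS_MATRIX.get(role, {}).get(system)
    if grant is None:
        return "denied"
    level, expires = grant
    if expires is not None and when > expires:
        return "denied"  # the window closed; no door left open
    return level

print(check_access("auditor", "billing"))                      # inside the window
print(check_access("auditor", "billing", now + timedelta(days=30)))  # after it
```

Note that the expiry is enforced at every check, so nobody has to remember to revoke the grant manually.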

 

Justin Beals:

Yeah, having access is very different than the permissions that you receive. That's why we need these types of matrices, to understand who should have access and for what amount of time. If we don't do that very granularly, then we'll have a security incident where we leak private data that shouldn't be leaked even internally.

 

Josh Bullers:

For sure, yeah. I mean, there are different levels of that exposure too, right? Like you mentioned, internal exposure is one risk; data can drift out of the right hands into the wrong hands. But external exposure is a big risk, too. So you've got to keep in mind how quickly it escalates.

 

Justin Beals:

Yeah. So another area that is pretty new to a lot of businesses, but where a lot of work is being done, is developing their own AI models, training on their data and looking at what type of intelligence they can layer into their software. AI models certainly present a unique security challenge in the cloud. How do you think about creating secure environments in which to develop and release them?

 

Josh Bullers: 

Yeah, I mean, it's a tricky one, right? And this image here gives a good example, at a high level, of how you might provision that. But then as you get into the weeds, you really have to think about your situation. So in this, you might see, okay, we've got our training data stored in kind of a secure encrypted vault that only gets transferred to locations as needed, maybe for training or read access, different things like that. I think something you have to be mindful of in that situation, too, is your training data: there should probably be careful thought about how it's even reaching that point. Does it have certain PII that shouldn't be in there? Do you want that in your training data? Once it's in a training data store, it's very realistic for it to get trained into the model. And so if you're using something like generative AI, then something you might not think about is the exposure that it might regurgitate that information at some future point in time, if you train it on that.



So I think your first line of defense is knowing what you're putting in there, right? And then you've got the natural things, like making sure that everything is encrypted in transit, and making sure that you've got endpoints that are properly secured and ready to access certain models or to kick off training of models. And you really want those segmented. You don't want somebody to have access across the board unless they're somebody who needs to poke at all those things. But even then, you want to make sure they're spinning things up deliberately and going through those same controls.

 

And then the last quick note on that: it's really common to have monitoring and logging around these systems, especially ones where PII may be present at some point in time, whether that's before you anonymize it or before you make synthetic data off of it. You really have to be careful about what you're putting into those logs. It's one thing to log that you hit a certain point and it broke, or something like that.

 

It's a whole different thing to just spit the entire payload out and say, yeah, this customer's data is here and we're going to use it in this way. Because that's another area where you may forget it's there, and somebody gets access to the system, and you have a data breach.
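Both concerns above, PII reaching the training store and PII leaking into logs, can share one redaction step. This is a minimal sketch: the two regex patterns (for SSN-shaped strings and email addresses) are illustrative only, and a real pipeline would use a dedicated PII-detection service rather than a handful of regexes.

```python
import re

# Illustrative PII patterns; real pipelines use dedicated detection tooling.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text):
    """Scrub obvious PII before a record reaches a training store or a log."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

def safe_log(message):
    # Log the event, never the raw payload.
    print(redact(message))

record = "Customer jane@example.com, SSN 123-45-6789, requested a refund"
print(redact(record))
```

Applying the same scrubber at both chokepoints means a record the model never saw is also a record the logs can never regurgitate.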

 

Justin Beals: 

Yeah, it feels to me like some of our baseline rules around security planning can really serve us well here. One is the data: where does it live? What are we doing with it? Who has access to it? And secondarily, how do we minimize its exposure? No matter what kind of system you're trying to develop, from modeling to training, I think if you think about that, and about your environment and how you want to deploy it, you'll come up with the best possible solution.

 

Josh Bullers:

Yep, exactly. 

 

Justin Beals: 

So let's talk a little bit about cloud environments. I've got an example on the left here and an example on the right. I think the one on the left really shows the way we might have done things before, a little more hodgepodge, and the one on the right is maybe a little more comprehensive.

 

What are the important changes to you here, Josh?

 

Josh Bullers: 

Yeah, I think the concept of segmentation here is the key thing, right? In the before times, as you mentioned, or just in a basic environment, you might have everything in one environment, and once you're inside of that environment, you can just go across all those boundaries. I think you need to make sure that everything is encrypted, make sure that there's only access to certain things, and make sure that there are IAM policies, if you're in an AWS environment, so that people or systems only have access to the data they need. It's really common for somebody to just give the read-all permission in those scenarios when you really want to segment that down. What is the system doing? What is the person accessing? And then in this kind of environment, you also want to make sure that there's some kind of monitoring going on. Is there something outside the norms that maybe you should be flagging?

 

Because in this kind of segmented situation, you can see what's normal behavior and what's abnormal behavior. And you can build out rules to flag when, say, a person ventured into an area that we didn't know they had access to, or that they shouldn't be in, and flag it, right?
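The flagging rule Josh describes is essentially a lookup against an expected-access map. A minimal sketch, with hypothetical principal and segment names:

```python
# Hypothetical segmentation map: which segments each principal is expected
# to touch. Anything outside the norm gets flagged for review.
ALLOWED_SEGMENTS = {
    "billing-service": {"payments-db"},
    "analyst":         {"reports-db", "warehouse"},
}

def flag_abnormal(events):
    """Return (principal, segment) pairs that fall outside expected access."""
    alerts = []
    for principal, segment in events:
        if segment not in ALLOWED_SEGMENTS.get(principal, set()):
            alerts.append((principal, segment))
    return alerts

events = [
    ("billing-service", "payments-db"),  # normal behavior
    ("analyst", "payments-db"),          # ventured outside their segment
]
print(flag_abnormal(events))
```

In practice the map would be derived from observed baseline behavior or IAM policy, and the alerts fed to whatever monitoring system is already in place.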

 

Justin Beals: 

Yeah, I also think that not seeing is not knowing, and without a good network diagram (and these are not great ones), some sort of 50,000-foot view of how everything pieces together is critical to being able to operate effectively.

 

Josh Bullers: 

To your point, it used to be really hard to come up with those network diagrams, but there are new tools to help keep them up to date, so it's not such a burden to maintain them and you can keep them more visible and more current. So yeah, I think use the right tool for the job in that situation and try to stay on top of it.

 

Justin Beals: 

Josh, I'd be remiss if I didn't throw a curveball at you. Let's talk a little bit about package dependencies, software bills of materials, and keeping software up to date. How do you think about that from a security perspective as a head of engineering?



Josh Bullers: 

Yeah, it's one of those things that can get out of hand really quickly, right? Like you have a breaking change in a package, so you pin to it and move on with your life, or you have a Docker image version that you're using just because the dependencies work. And I think that's the dangerous moment right there, right? You've got to pause and make sure that there's some kind of tool in your system, whether it's a wholesale vulnerability scanner or something purpose-built for your Python environment or for AWS as you push images, just something that's flagging those in real time.

 

You want to stay on top of those vulnerabilities, because it's one thing to say we're ignoring a low or a medium because we're just biding our time to upgrade. But that's a slippery slope into the critical vulnerability, right? Suddenly you have a critical and you're saying, we just can't unpin because we didn't stay on top of it. And in some cases,

 

you may get by; in other cases it may immediately bite you, and somebody gets in through something they shouldn't be able to. And it's all because you didn't stay on top of that upgrade in time. You've got to stay on top of that tech debt. It's a pain at times, but you've got to carve out just enough time to stay on top of it and upgrade as often as you can.
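The "flag it in real time" tooling Josh mentions usually ends with a severity gate in CI. Here is a sketch of that gate; the findings shape is a made-up example, since real scanners (pip-audit, Trivy, ECR image scanning, and so on) each emit their own report formats that you would parse first.

```python
# Sketch of a CI gate over a dependency scanner's findings.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def should_fail_build(findings, threshold="high"):
    """Fail the build if any finding meets or exceeds the threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

findings = [
    {"package": "requests", "severity": "medium"},
    {"package": "pillow", "severity": "critical"},
]
print(should_fail_build(findings))  # the critical finding blocks the build
```

Lowering the threshold over time is one way to climb out of the "we ignored the mediums" slippery slope without blocking every release at once.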

 

Justin Beals: 

The statistics tell us that staying up to date is safer than staying out of date.

 

Josh Bullers:

Just seeing the output from that vulnerability scanner should scare you enough when you see like the red or the yellow. It should be an indicator that maybe you should give it a little attention.

 

Justin Beals:

That's right. Well, Josh, thanks so much for joining us and talking about cloud configurations and security and a little software development lifecycle today.

 

Josh Bullers: 

Yeah, thank you.

 

Insider Threats with Elmy Peralta:

 

Justin Beals: Welcome, Elmy. Thanks for joining us today. Just tell us a little bit about what you do at Strike Graph.

 

Elmy Peralta: 

Hi, Justin. Thanks for having me. I am the assessments manager here at Strike Graph, part of the assessments team.

 

Justin Beals: 

Awesome. So today we're going to talk a little bit about insider threats and hybrid work and a couple of other topics with you that you specialize in. And so up on the screen here, I have a heat map and it shows kind of the varying risk levels across different work environments. You know, what patterns should we notice in this?




Elmy Peralta: 

Yeah, so this heat map helps organizations understand how risk changes across work settings. You'll notice that certain high-risk activities in red, like accessing financial systems or handling large data sets, show elevated risk in home and public environments. But also notice some surprises: some functions actually show lower risk in home environments than in office settings, particularly around certain types of social engineering.

 

So the key takeaway is that security controls should be environment-adaptive, automatically applying stronger protections when the context indicates higher risk.
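Environment-adaptive control selection can be sketched as a lookup from (activity, environment) to a risk level, and from risk level to required protections. The risk scores and control names below are illustrative placeholders, not values from the heat map itself:

```python
# Hypothetical risk scores per (activity, environment), echoing the heat map.
RISK = {
    ("financial_systems", "office"): "medium",
    ("financial_systems", "home"):   "high",
    ("financial_systems", "public"): "high",
    ("email", "home"):               "low",
}

# Stronger protections kick in automatically as context risk rises.
CONTROLS = {
    "low":    ["mfa"],
    "medium": ["mfa", "managed_device"],
    "high":   ["mfa", "managed_device", "vpn", "session_recording"],
}

def required_controls(activity, environment):
    # Unknown contexts default to medium rather than to the weakest posture.
    level = RISK.get((activity, environment), "medium")
    return CONTROLS[level]

print(required_controls("financial_systems", "public"))
```

The design choice worth noting is the default: an unrecognized context falls back to a cautious control set instead of the most permissive one.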

 

Justin Beals: 

I see. So I think it's important that you design the security practice around hybrid workspaces according to the risk situation at your particular business. And of course, when we deal with other types of data, health care professionals specifically often need remote access to patient data, especially with a really distributed workforce in health care. How can this patient data be secured properly?

 

Elmy Peralta:

Yeah, so this layered approach is definitely essential for healthcare remote access. Starting from the outside, we have device verification, ensuring only managed and secured devices connect. Then we have multi-factor authentication, ideally healthcare specific, like biometrics or hardware tokens. Next is context level access control that considers location, time, and access patterns. Then comes data level controls that limit what can be downloaded or shared. And finally, comprehensive activity monitoring.

 

These layers work together to enable the remote access that healthcare professionals need while still protecting sensitive patient information.
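The layered checks described above can be modeled as a chain of predicates over an access request, where every decision also feeds the activity-monitoring layer. This is a minimal sketch with hypothetical request fields and only three of the five layers spelled out:

```python
# Each layer is a predicate over a (hypothetical) access request; all must
# pass, and every decision is logged for the activity-monitoring layer.
def device_verified(req):  # only managed, secured devices connect
    return req.get("managed_device", False)

def mfa_passed(req):       # ideally biometrics or a hardware token
    return req.get("mfa", False)

def context_ok(req):       # location/time/pattern checks, simplified here
    return req.get("location") in {"clinic", "home"}

LAYERS = [device_verified, mfa_passed, context_ok]

def authorize(req, audit_log):
    for layer in LAYERS:
        if not layer(req):
            audit_log.append(("denied", layer.__name__))
            return False
    audit_log.append(("granted", req.get("user")))
    return True

log = []
req = {"user": "dr_lee", "managed_device": True, "mfa": True, "location": "home"}
print(authorize(req, log), log)
```

Because the layers run in order from outermost to innermost, a denial records exactly which layer stopped the request, which is useful for the monitoring layer.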

 

Justin Beals:

Yeah, and certainly any organization that's touching patient health care information, whether digital or even paper files, is going to be held to the HIPAA requirements. That's a regulatory requirement in the United States, and there are others globally. And I think that's why it's really important for us to cover that in security awareness training. You know, other organizations might be more concerned about financial risk: fraud, or the loss of money, or some other type of financial impact. How does this workflow show how multiple people need to be involved in a financial process? Isn't it kind of inefficient to get more than one person involved?

 

Elmy Peralta:

It might seem that way, but segregation of duties is a fundamental control, especially in financial operations. So this workflow shows how different roles handle different parts of a transaction process, ensuring no single employee can initiate, approve, and execute sensitive actions. In the hybrid or remote work environments that we've leaned into more, this becomes even more important, because informal oversight decreases when people aren't physically together. So digital workflows should enforce these separations automatically and flag any attempts to circumvent them.
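The automatic enforcement Elmy describes reduces to one invariant: no person may hold more than one role on the same transaction. A minimal sketch, with hypothetical field names for the three roles:

```python
# Sketch: a digital workflow that enforces segregation of duties by
# rejecting any transaction where one person fills two of the three roles.
def check_segregation(txn):
    roles = [txn["initiated_by"], txn["approved_by"], txn["executed_by"]]
    if len(set(roles)) < len(roles):
        return "flagged: same person holds multiple roles"
    return "ok"

good = {"initiated_by": "ana", "approved_by": "ben", "executed_by": "cho"}
bad  = {"initiated_by": "ana", "approved_by": "ana", "executed_by": "cho"}
print(check_segregation(good))
print(check_segregation(bad))
```

In a real system this check would run at approval time and raise an alert rather than return a string, but the duplicate-role test is the whole control.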

 

Justin Beals:

That's excellent. And we certainly have a lot of digital tools and team tools that we can use to both communicate and operate together in really efficient patterns. Well, this is really great. But certainly as we think about insider threats broadly, we're kind of monitoring what goes on. And I think data loss is probably the thing we're most concerned about in an insider threat situation. This dashboard shows potential data loss incidents. How does this type of monitoring work?

 

Elmy Peralta: 

So this data loss prevention dashboard uses behavioral analytics to identify potential insider threats. It's looking for patterns that deviate from normal behavior like downloading unusual volumes of data, accessing systems at odd hours, or even attempting to exfiltrate information. The key is establishing baselines of normal activity, then identifying the anomalies. In hybrid work, these systems become crucial because traditional physical security observations are no longer possible.

 

Note that effective DLP isn't just about prevention, it's also about early detection when prevention fails.
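The baseline-then-anomaly idea Elmy describes can be illustrated with the simplest possible detector: a z-score over one metric, here daily download volume. Real DLP platforms model many behavioral signals at once; this sketch, with made-up numbers, shows only the core pattern of establishing a baseline and flagging deviations.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's volume if it sits more than `threshold` standard
    deviations from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

history = [120, 95, 110, 130, 105, 115, 100]  # normal daily volumes (MB)
print(is_anomalous(history, 118))    # within the baseline
print(is_anomalous(history, 5000))   # unusual volume, flag for review
```

Early detection, as noted above, is the point: the flag fires on the unusual download itself, not weeks later during an investigation.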

 

Justin Beals: 

That's excellent, Elmy. And I think you would probably agree with me that a lot of this kind of work falls to the responsibility of an IT manager or an information systems manager, where we're dealing with insider threats from a technological perspective, or data retention and data loss issues. Is that right?

 

Elmy Peralta:

Yeah, absolutely, Justin.

 

Justin Beals: 

Elmy, thanks so much for joining us today and sharing your expertise on insider threats for our annual security awareness training.

 

Elmy Peralta: 

Thank you for having me.

 

---

Regulatory Compliance with Micah Spieler:

 

Justin Beals: 

Hi, Micah. Thanks for joining us today. It's a treat to have you helping us out on our security awareness training. Tell us a little bit about what you do.

 

Micah Spieler:

Absolutely, it's great to see you.

 

I'm our chief product officer here at Strike Graph, which means that I really manage our compliance software, working with our team of engineers, product designers, and internal assessment experts on building the best GRC solution that we can. Oftentimes that means we're answering questions for our customers about regulatory changes and the shifting landscape. So it's something I try to stay on top of as best I can, and I'm happy to share that with you today.

 

Justin Beals: 

Yeah, appreciate that. I think some people might wonder why regulatory or even compliance is a part of security awareness training, but it's a big part of what drives us to do what we do in some ways around security. Is that correct?

 

Micah Spieler:

Yeah, no, absolutely. I always try to instill in folks that compliance is really capturing the receipts of your security program, right? And so when we're thinking about being compliant, especially with regulations, it's really important to recognize where governments or governing bodies are looking in terms of what good security looks like, because honestly, they're trying their best to protect all of us. And so it's important to stay aware.



Justin Beals: 

Yeah. So we have up the regulatory calendar, some of the major events that we're looking forward to in 2025 and 2026. Maybe not looking forward to; it looks very busy. You know, how are organizations, big or small, supposed to cope with all these changes?

 

Micah Spieler:

Great question. It's definitely overwhelming. You know, we're on the other side of the digital transformation that kind of rocked the world over the last couple of decades. And now companies and regulators are trying to figure out how to regulate all of this digital information we're producing out there. So honestly, we see regulatory expansions across pretty much every industry. In 2025 alone, with the onset of all the different AI tools that are out there, there are lots of AI regulations starting to roll out. We've got the AI Accountability Act coming into place. HIPAA has expanded rules about how you can keep track of and manage digital health data coming in August. You've got manufacturing companies being asked to comply with digital security standards by the Department of Defense through CMMC. So again, it's just all over the place. And I think what we really try to help folks understand is how to put a proactive motion in place so that they're not caught behind the ball when these regulations start to really take hold.

 

And oftentimes, what we see in the best, most proactive companies is that they've got a team who is paying attention. They've got a good tool that sits with them, so that they're really able to stay on top of things, watch for these regulatory changes, and be prepared to implement them as they start to come online.

 

Justin Beals: 

Yeah, I know we've used a committee approach to the work and it's been really successful. To your point, these are some of the major themes and changes that we're seeing globally, beyond just things like the Cybersecurity Maturity Model Certification for DoD contractors: major areas of continued risk, with new risk now added on top.

 

Micah Spieler:

Well, you kind of touched on this, Justin. I'm glad you brought up the compliance committee, because I do really think it's one of the really successful components of our organizational structure. But one thing to note is that the compliance committee is made up of folks from many different departments, right? And I wanted to call that out because as you start to explore what these regulatory changes look like, they touch every part of the business. It's not just securing a database or things like that.

 

They touch financial regulations and financial data. They touch customer data, know-your-customer data; they touch marketing; they touch sales. So it's really important that you're able to proactively, as an organization, think about the full business slice, because it really impacts everything and everyone.

 

Justin Beals:

Yeah. I think that all of us come to this type of work thinking there must be an easier way. And intuitively, I think we feel that. But maybe you can confirm that a lot of times something that we do is applicable across multiple regulatory requirements.

 

Micah Spieler: 

Yeah, absolutely. And we've got this Venn diagram up here that kind of demonstrates what we're talking about. As we're thinking across departments and across data structures, oftentimes the best security practices are the same: what type of data you're managing, secure logins, policies in place for acceptable use or for how you access data. All of those things are really important. And so there is significant overlap between the regulatory compliance programs out there as well as some of the industry-specific compliance programs. Oftentimes you can dial in where there are specifics for a particular industry or regulation, but otherwise rely on one security and compliance program to cover the overlap across all of these different frameworks.

 

Justin Beals: 

Yeah, and even a single organization may sit in multiple parts of the Venn diagram where they have regulatory impact, you know, both AI and a financial institution, or health care and a financial institution. So I think part of this is a responsibility of the organization and the people who work there: to stay aware of the changes that impact what they do specifically, their professional expertise.



Micah Spieler: 

And I would also offer that you share with your colleagues when those things are happening, right? Because something that might impact you in your line of work might also impact someone else on your team or in a different department. And you can share how you're handling some of those different situations.

 

Justin Beals: 

OK, so now that we know it's philosophically possible to make it easier, what is a way we might make it easier, so that we understand what we do and how that maps to these regulatory requirements?

 

Micah Spieler: 

Yeah, absolutely. So what we recommend for our customers and the folks we work with is really to take a control-centric approach. And what this means is your controls: these are the things that you are committing to put in place to protect your data, or to protect your security, or to meet regulatory requirements. And the example we've got up here on the screen demonstrates many different frameworks across many different continents, across both sides of the Atlantic there. Imagine that we are an AI company and we want to be in compliance with EU standards, US standards, and ISO standards, so we've adopted these three different frameworks.

 

Each of them has similar requirements, although they might be a little bit different; they might be specific in terms of how you can use data, or what type of encryption is necessary, or other things. What we look at is setting up this security posture that combines your risks, your controls, and then the evidence, your control activity, kind of all into one place.

 

And the important piece here, as Justin was saying about how we make this easier and streamline this work, is really that control layer. Because this control layer can act as the glue that connects multiple different frameworks and multiple different risks without having to duplicate your work, without having to manage all of these similar regulations in different silos.

 

So really, it is this control layer that gives you the opportunity to have one central compliance program rather than thinking about it as a bunch of individual programs. That really sets folks up not only for a program that's easier to implement, but also a much stronger program, because things can reinforce each other when you think of them holistically.
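The "control layer as glue" idea can be sketched as a mapping from one control to the requirements it satisfies in each framework, so evidence is collected once and reused everywhere. The control IDs and requirement references below are illustrative examples, not an exact or complete crosswalk:

```python
# Sketch of a control layer: one control maps to related requirements
# across several frameworks. Identifiers here are illustrative only.
CONTROL_MAP = {
    "CTRL-ENCRYPT-AT-REST": {
        "ISO 27001": ["A.8.24"],
        "SOC 2":     ["CC6.1"],
        "HIPAA":     ["164.312(a)(2)(iv)"],
    },
    "CTRL-ACCESS-REVIEW": {
        "ISO 27001": ["A.5.18"],
        "SOC 2":     ["CC6.2", "CC6.3"],
    },
}

def frameworks_covered(control):
    """Which frameworks does a single piece of evidence satisfy?"""
    return sorted(CONTROL_MAP.get(control, {}))

print(frameworks_covered("CTRL-ENCRYPT-AT-REST"))
```

The payoff is visible in the lookup: one encryption control, operated and evidenced once, reports into three frameworks at the same time.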

 

Justin Beals:

Yeah, that's awesome. I know that one of the things that could make this easier is whether we can automate any of this compliance work. And certainly it's an area where there's been a lot of innovation and investment. Can you talk to us a little bit about how technology is helping us automate some of the compliance process or compliance outcomes?

 

Micah Spieler: 

Absolutely. And Justin's got the key phrase up on the screen here: continuous compliance. Compliance is not a one-and-done activity. Even though your audits may be periodic, truly, in order to actually keep your data safe and to protect your users and your own intellectual property, you need to be continuously monitoring all of your different compliance controls.

 

And so what we really look for here are opportunities to automate as much of that as possible, right? If we're able to do that, then the humans, the team, the compliance committee, whatever you want to call it, can focus less on rote evidence collection, taking screenshots of settings and other stuff like that, and can think more about strategic areas like exception handling or oversight of the entire program. Or if there's an issue, what can you do to drill in there? Do you need to implement a new program or a new control to address it? So again, we think of it as a continuous cycle. And as we look at the different ways to maintain that compliance and that monitoring, thinking about how you can automate as many of those tasks as possible is really important.

 

You can build integrations into almost any system now, right? It's important that you do that to get the data out of those systems and into a central place where you can monitor it, to ensure that there aren't any issues, that you are up to date with your controls, that things are operating the way you expect, and to collect that evidence.
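An automated evidence check of the kind described above boils down to: pull the setting through an integration, compare it to the control's expected state, and store a timestamped record instead of a screenshot. The `fetch_settings` stub and the control IDs below are hypothetical stand-ins for a real API integration:

```python
import json
from datetime import datetime, timezone

def fetch_settings():
    # Stand-in for a real API call into a cloud or IT system integration.
    return {"mfa_enforced": True, "log_retention_days": 365}

def collect_evidence(control_id, expected):
    """Compare observed settings to a control's expected state and
    return a timestamped evidence record."""
    actual = fetch_settings()
    passed = all(actual.get(k) == v for k, v in expected.items())
    return {
        "control": control_id,
        "passed": passed,
        "observed": actual,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

evidence = collect_evidence("CTRL-MFA", {"mfa_enforced": True})
print(json.dumps(evidence, indent=2))
```

Run on a schedule, a check like this is the continuous-compliance loop: evidence is collected automatically, and a `passed: false` record becomes the exception a human investigates.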

 

Justin Beals: 

Yeah, certainly. I also think that a lot of our vendors, like our cloud computing providers or business and IT systems providers, have baked in a lot of security features that we don't always take advantage of to execute that continuous compliance too, right?

 

I can just think of logging and monitoring for understanding if an outage is happening in our application, and having multiple ways that we get alerted, whether it's SMS or email, et cetera, as being that kind of continuous monitoring, and ensuring that it is running in an effective way.

 

Justin Beals: Yeah, absolutely. Well, Micah, thanks for joining us for Security Awareness Training 2025. We appreciate it.

 

Micah Spieler: Absolutely. Great to see you, Justin.

 

---

 

Security Culture, with Juliett Eck

 

Justin Beals: 

Thanks, Juliett, for joining us for our 2025 security awareness training. You have our final section today: the human element. Before we dive in, why don't you tell us a little bit about your background and what you do?



Juliett Eck:

Yeah, hi Justin. Thanks for having me. I am our CFO and also oversee our operations at StrikeGraph. I oversee human resources, the operations between teams, and also our finances.

 

Justin Beals:

Awesome. So we have up this slide. It's a comparison between something called a security mindset and a concept of rule following. Why is the mindset approach superior in developing a security culture?

 

Juliett Eck:

Yeah, when we think about rule following, usually rules are very specific. They define very rigid constructs for what you're supposed to do or not do. A mindset, by contrast, is a broader understanding of the reasoning behind the rules. And so it provides more flexibility for folks to understand, and also to be curious about, anything that relates to security that might fall outside of a rule but still very much needs to be considered as part of ensuring security.

 

Justin Beals: 

It's a harder thing to teach the mindset than the rules, of course, but very valuable for generating that fundamental culture, correct?

 

Juliett Eck:

Yeah. As an example, if there's a rule that you're supposed to report a phishing attempt, you might see a phishing attempt and report it. Well, if something happens that maybe isn't a phishing attempt, but it doesn't look right, and you're curious about it, and you've got a mindset around recognizing and understanding security threats, you'll be more inclined to report or identify something that is a security risk but might not be specifically addressed by a rule the organization has.

 

Justin Beals: 

One of the challenges I think that a lot of security programs have is that when an incident happens, it can be so dramatic an impact on the business that people get very upset and angry. And that can lead security programs to focus on punishing mistakes. This diagram shows a different approach.

 

Juliett Eck:

Yeah. Fear-based security really fails because it discourages people from bringing those kinds of breaches forward. By creating an environment that gives positive reinforcement to the people who identify a security breach or an issue, and celebrates and rewards it within the team, you create an overall environment where people, one, feel comfortable bringing it forward, and two, feel connected with their teams and part of a larger mission and culture of the company. It's like, we're all in this together, and they've just helped keep the organization and the employees within it safe.

And it's really just a cultural shift, from being security police, where people are ensuring that you're doing the right thing all the time, to everyone being a partner in ensuring the success and safety of security for our organization.

 

Justin Beals:

I mean, definitely the rubber hits the road in not singling people out, and, to the point of this diagram, in providing recognition for following appropriate behaviors regardless of the outcome.

 

Juliett Eck:

Yeah. For example, if there's fear at the organization around reporting something, people might sit on it, have time to consider the negative consequences, and maybe delay sharing the information. Whereas if you have a culture that really supports the reporting of these issues, it turns into: how fast can I get this reported so that my teammates are aware and we can take care of it as fast as we can? It's really just shifting the energy from one space to another, which leads to faster results in addressing any security issue that comes up.

 

Justin Beals: Yeah. All right. We're going to take a dose of our own medicine here. And there's been a lot of things to fear and be careful of in security awareness training. Let's talk about a couple of success stories. We have one from finance, healthcare, and AI operations. Can you highlight some of the examples of successful stories?

 

Juliett Eck: 

Yeah, so AI systems are really great at detecting patterns and anomalies in patterns. And so by using AI systems that way, you can really prevent a lot of fraudulent transactions from happening. I think we see a lot of credit card companies and banking institutions using this to detect fraudulent charges. It's a fantastic way to prevent and detect unusual patterns.

 

In healthcare, when you've got a security mindset, people who understand the environment and how information is used can really identify when information is being requested in a way that doesn't fit how it's typically shared. Unauthorized or unusual access is going to be spotted by somebody with a curious, security-minded approach who questions things that look out of place.

 

And then in finance, we do have tools that identify things like I mentioned before. But it also helps to have people who are familiar with what typical transactions look like, what the budgets are, and what approval processes are in place. That deep understanding of how the business works, and of what they typically see and don't see, gives them an opportunity to catch things. We actually see this on our own team quite a bit, where we're catching fraudulent requests for invoices to be paid.

 

Justin Beals: 

And I think one of the truths across all of these examples is that both the vulnerability and the opportunity for success in security come down to the people and what we choose to do.

 

Juliett Eck: 

Yeah, and one of the other things that culturally supports a good security practice is everyone on the team understanding everybody's roles and responsibilities at the organization. Knowing which people interact with which teams helps them identify, for example, phishing attempts where someone sends emails impersonating another individual on the team, which is a really common way we see security breaches happen.

 

Justin Beals:

Yeah. So as we drive back up to kind of a high-level view of an organization, we have a model in our slides here of maturity around security. What's your advice for organizations that want to advance in their maturity of security operations?

 

Juliett Eck: 

Yeah, I think it's to focus first on implementing incremental improvements rather than trying to go for a complete transformation. We all start somewhere. You can tell by looking at this chart that reactivity sits down at the bottom, and I think we can all agree that when you're in a reactive state, it's terribly inefficient. You're spending a lot of time, a lot of resources, and a lot of conversations just reacting to situations.

 

And if you go to the opposite end of the spectrum, to optimizing, what you're really doing is incorporating a security mindset into your decision-making process and your operational planning, which becomes really efficient and cost-effective. You're basically taking what is reactive at the back end and turning it into a control at the front end. You're controlling for a lot of it in the design of how you manage your operations.

 

Justin Beals:

Yeah, this is certainly something I tell a lot of organizations that ask what to do about security: all of them are already doing some security. So I like your recommendation of an incremental approach, because oftentimes we need to build on top of what is already working.

 

Juliett Eck:

Yeah, the other recommendation I would have, for folks who manage teams or are in leadership, is that the culture really does start with the leadership team. There can be cultures that say, man, these people are imposing all this stuff on us, it's slowing us down. But when leadership leads with this and says, we're going to take just a little bit more time to consider this aspect at the front end of the decision-making process, it may feel like an additional layer of burden. Really, though, that leadership perspective is saving time and preventing reactive, inefficient use of company resources at a later date. And there's so much opportunity to make a really impactful change in the organization by shooting for optimization at that level.

 

Justin Beals:

Wonderful. Juliett, we really appreciate your help and, of course, your hard work at our own organization. Thanks for sharing your perspective on the human element as part of security awareness training. Thanks to all of our listeners, and have a wonderful day.

Juliett Eck:

Thanks.



Keep up to date with Strike Graph.

The security landscape is ever changing. Sign up for our newsletter to make sure you stay abreast of the latest regulations and requirements.