One of the most daunting tasks for a risk manager is understanding the company’s weaknesses. In psychological terms, we are subject to a cognitive bias known as the illusion of explanatory depth: the well-documented tendency to believe we understand something better than we actually do.
Release the machines.
Network scans and penetration testing are the standard fare for vulnerability and risk managers. The National Institute of Standards and Technology (NIST) Risk Management Framework, through its SP 800-53 control catalog, places vulnerability scanning in the Risk Assessment (RA) control family, and it is this area of activity that we most often use to determine the company’s weaknesses.
As a technically savvy individual, I reach for network scanning utilities for a lot of reasons; the most prominent is that it feels safe. System and network scan results, typically documented as Common Vulnerabilities and Exposures (CVE) identifiers, are produced from well-documented combinations of ports and protocols and dictionaries of Common Platform Enumeration (CPE) entries.
Technical people love documentation and consistency. Additionally, vulnerabilities that turn up on a scan result are the easiest to deal with because a computer system typically acts and behaves in generally expected ways.
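To make the mechanics concrete, here is a minimal sketch of that matching step: a scanner fingerprints listening services as CPE strings and looks each one up in a dictionary that maps CPEs to known CVEs. The detected services and the tiny dictionary below are invented for illustration (a real scanner queries a full vulnerability database), though CVE-2021-41773 really is a known Apache 2.4.49 vulnerability.

```python
# Illustrative sketch: how a scanner turns service fingerprints into CVE findings.
# The service list and dictionary are toy stand-ins for real scan data.

# Fingerprints a scanner might derive from open ports and service banners.
detected_services = [
    {"port": 22, "cpe": "cpe:2.3:a:openssh:openssh:7.4"},
    {"port": 443, "cpe": "cpe:2.3:a:apache:http_server:2.4.49"},
]

# A tiny stand-in for the CPE-to-CVE dictionary a real scanner would consult.
cve_dictionary = {
    "cpe:2.3:a:apache:http_server:2.4.49": ["CVE-2021-41773"],
}

def match_cves(services, dictionary):
    """Return (port, cpe, cve) findings for every service with known CVEs."""
    findings = []
    for svc in services:
        for cve in dictionary.get(svc["cpe"], []):
            findings.append((svc["port"], svc["cpe"], cve))
    return findings

for port, cpe, cve in match_cves(detected_services, cve_dictionary):
    print(f"port {port}: {cpe} -> {cve}")
```

The point of the sketch is the predictability the surrounding text describes: the same fingerprints against the same dictionary always yield the same findings, which is exactly why this kind of scanning feels safe.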
When you work on a remediation plan based on scan results, you feel like you are performing your due care and due diligence; and indeed, you are. Because of this, the average company will carve out a modest amount of resources to scan the network for technical vulnerabilities, make this part of its security portfolio, and check that box. Great work!
Problem in chair, not in computer (picnic).
“We are scanning the network. I guess we are done.”
“Whoa, hold your horses,” one would argue. “What about Layer 8 vulnerabilities?”
The silence is deafening.
Layer 8 is a tongue-in-cheek play on the Open Systems Interconnection (OSI) model, which abstractly describes the communication pathways of computer networks in seven layers, with Layer 8 referring to the “individual” or person in front of the computer.
Focusing efforts on technological weaknesses leaves the most valuable and most vulnerable aspect of the environment unchecked: its people.
In our previous blog post, we described the mind games that are inherent to social engineers. People, unlike computer systems, act unpredictably, sometimes irrationally, and are, by nature, driven by emotion.
In 2021, over 80% of cybersecurity incidents were the result of social engineering, and it is with an organization’s people that we are more likely to find a company’s vulnerabilities.
You can’t really scan people… or can you? And how do we address these weaknesses?
What does a scanner see?
It turns out that we can scan our people to determine the likelihood that one of our own would fall victim to a scam, and it seems like the most efficient method to do this is with simulated phishing campaigns.
With these exercises, security administrators will probe their organization’s users by sending out “safe” phishing emails or attachments linking back to faux login pages with the sole purpose of catching their own people in the act.
By linking results back to the company’s directory, and with some careful planning, security researchers can tell a lot about specific audiences and their respective attitudes, behaviors, and propensity to ignore warning signs.
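The analysis step can be sketched in a few lines: tally risky actions (clicking the link, submitting credentials) by department to see which audiences need attention. The event records below are invented for illustration; a real phishing-simulation platform would export comparable click and submission events.

```python
# Illustrative sketch: tallying simulated-phishing results by department.
# The events below are fabricated sample data, not real campaign output.
from collections import Counter

events = [
    {"user": "alice", "dept": "finance", "action": "clicked_link"},
    {"user": "bob", "dept": "finance", "action": "submitted_credentials"},
    {"user": "carol", "dept": "engineering", "action": "reported_phish"},
]

def risky_actions_by_dept(events):
    """Count click and credential-submission events per department."""
    risky = Counter()
    for e in events:
        if e["action"] in ("clicked_link", "submitted_credentials"):
            risky[e["dept"]] += 1
    return risky

print(risky_actions_by_dept(events))
```

Note that the sketch counts reporting a phish as a non-risky action; in practice, report rates are worth tracking too, since a rising report rate is one of the clearest signs the training is working.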
What your company is doing to identify potential risks in user education and community culture is arguably just as important as what your company is doing to scan and probe network resources to identify CVEs.
Although not explicitly required for regulatory compliance, simulated phishing is often lumped into ongoing security awareness training, which is required by many data protection regulations.
This is likely because, unlike installing a patch with a configuration management system to address a system vulnerability, the remediation effort for a failed phishing test is end-user training.
NIST, for example, puts these types of exercises into the Awareness and Training control family (AT), more specifically as “Practical Exercises” used as part of literacy or role-based training. Furthermore, simulated phishing is becoming more or less expected by major cybersecurity insurance providers. Is this enough?
Due diligence and horse sense.
One of the major challenges in attempting to address human vulnerabilities and hacking your own people is the taboo that surrounds the practice.
“We don’t have time for this” is a common retort, suggesting that friendly phishes would hurt productivity more than a compromise from a malicious phish would. OK. There is also a significant fear that user behavior metrics from simulated phishing would somehow be used to shame the users and, in some cases, could traumatize the recipients.
As a security administrator who wants the full picture of your company’s weaknesses, you need to phish your own people, but you must provide some assurances that this is a friendly phish, not a punitive effort. Like a fire drill, it is performed for their own safety and for the safety of everyone else.
Explore the world of cybersecurity.
To learn more ways to analyze your organization’s weaknesses, explore the Security Analyst Nanodegree program. Or, get started in cybersecurity by learning the foundations with the Introduction to Cybersecurity Nanodegree program.